Search results for: Hidden Markov Model (HMM)
16990 Estimation of Chronic Kidney Disease Using Artificial Neural Network
Authors: Ilker Ali Ozkan
Abstract:
In this study, an artificial neural network model has been developed to estimate chronic kidney failure, which is a common disease. The estimation uses 24 input attributes comprising the patients' age, their blood and biochemical values, and various chronic diseases. The input data have been subjected to preprocessing because they contain both missing values and nominal values. The 147 patient records obtained after preprocessing have been divided into 70% training and 30% testing data. As a result of the study, the artificial neural network model with 25 neurons in the hidden layer has been found to be the model with the lowest error value. Chronic kidney failure has been estimated with an accuracy of 99.3% using this artificial neural network model. The developed artificial neural network has thus been found successful for the estimation of chronic kidney failure from clinical data.
Keywords: estimation, artificial neural network, chronic kidney failure disease, disease diagnosis
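The preprocessing and split described in this abstract can be sketched in plain Python. This is an illustrative toy, not the paper's pipeline: the field names (`creatinine`, `diabetes`) and the four records are invented stand-ins for the 24-attribute data set; mean imputation for numeric gaps and mode imputation plus 0/1 encoding for nominal gaps are one common way to handle "missing and nominal values".

```python
import random

# Hypothetical toy records -- field names are illustrative, not the
# paper's attribute set. None marks a missing value.
records = [
    {"age": 48, "creatinine": 1.2, "diabetes": "yes", "ckd": 1},
    {"age": 35, "creatinine": None, "diabetes": "no", "ckd": 0},
    {"age": 61, "creatinine": 2.1, "diabetes": "yes", "ckd": 1},
    {"age": 29, "creatinine": 0.9, "diabetes": None, "ckd": 0},
]

# 1) Impute missing numeric values with the column mean.
vals = [r["creatinine"] for r in records if r["creatinine"] is not None]
mean_cr = sum(vals) / len(vals)
for r in records:
    if r["creatinine"] is None:
        r["creatinine"] = mean_cr

# 2) Encode nominal values as 0/1; impute missing with the mode ("yes").
for r in records:
    r["diabetes"] = {"yes": 1, "no": 0, None: 1}[r["diabetes"]]

# 3) 70/30 train/test split.
random.seed(0)
random.shuffle(records)
cut = int(0.7 * len(records))
train, test = records[:cut], records[cut:]
print(len(train), len(test))
```

The cleaned, numeric records could then be fed to any ANN library; the 25-hidden-neuron architecture reported above would be a model-selection result, not something this sketch chooses.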
Procedia PDF Downloads 447
16989 Genome Sequencing of the Yeast Saccharomyces cerevisiae Strain 202-3
Authors: Yina A. Cifuentes Triana, Andrés M. Pinzón Velásco, Marío E. Velásquez Lozano
Abstract:
In this work, the sequencing and genome characterization of a natural isolate of the yeast Saccharomyces cerevisiae (strain 202-3), identified as having potential for the production of second-generation ethanol from sugarcane bagasse hydrolysates, is presented. This strain was selected because of its capability to consume xylose during the fermentation of sugarcane bagasse hydrolysates, taking into account that many strains of S. cerevisiae are incapable of processing this sugar. This advantage, together with other prominent positive aspects of its fermentation profile in bagasse hydrolysates, made strain 202-3 a candidate for improving the production of second-generation ethanol, and studying the strain at the genomic level was proposed as a first step. The molecular characterization was carried out by paired-end genome sequencing on the Illumina HiSeq 2000 platform; the assembly was performed with different programs, with ABySS (k-mer 89) finally chosen as the assembler. Gene prediction was carried out with Augustus, which follows a hidden Markov model approach. The identified genes were scored based on similarity to public nucleotide and protein databases. Records were organized by ontological function at different hierarchical levels, which identified central metabolic functions and roles of S. cerevisiae strain 202-3, highlighting the presence of four possible new proteins, two of them probably associated with the positive consumption of xylose.
Keywords: cellulosic ethanol, Saccharomyces cerevisiae, genome sequencing, xylose consumption
Procedia PDF Downloads 320
16988 Artificial Neural Network to Predict the Optimum Performance of Air Conditioners under Environmental Conditions in Saudi Arabia
Authors: Amr Sadek, Abdelrahaman Al-Qahtany, Turkey Salem Al-Qahtany
Abstract:
In this study, a backpropagation artificial neural network (ANN) model has been used to predict the cooling and heating capacities of air conditioners (AC) under different conditions. A sufficiently large set of measurement results was obtained from the national energy-efficiency laboratories in Saudi Arabia and used for the learning process of the ANN model. The parameters affecting the performance of the AC, including temperature, humidity level, specific heat enthalpy indoors and outdoors, and the air volume flow rate of the indoor units, have been considered. These parameters were used as inputs for the ANN model, while the cooling and heating capacity values were set as the targets. A backpropagation ANN model with two hidden layers and one output layer could successfully correlate the input parameters with the targets. The characteristics of the ANN model, including the input-processing, transfer, neuron-distance, topology, and training functions, are discussed. The performance of the ANN model was monitored over the training epochs and assessed using the mean squared error function. The model was then used to predict the performance of the AC under conditions that were not included in the measurement results, and the optimum performance of the AC was also predicted under the different environmental conditions in Saudi Arabia. The uncertainty of the ANN model predictions has been evaluated, taking into account the randomness of the data and the lack of learning.
Keywords: artificial neural network, uncertainty of model predictions, efficiency of air conditioners, cooling and heating capacities
Procedia PDF Downloads 74
16987 Optimal Bayesian Chart for Controlling Expected Number of Defects in Production Processes
Abstract:
In this paper, we develop an optimal Bayesian chart to control the expected number of defects per inspection unit in production processes with long production runs. We formulate this control problem in the optimal stopping framework. The objective is to determine the optimal stopping rule minimizing the long-run expected average cost per unit time considering partial information obtained from the process sampling at regular epochs. We prove the optimality of the control limit policy, i.e., the process is stopped and the search for assignable causes is initiated when the posterior probability that the process is out of control exceeds a control limit. An algorithm in the semi-Markov decision process framework is developed to calculate the optimal control limit and the corresponding average cost. Numerical examples are presented to illustrate the developed optimal control chart and to compare it with the traditional u-chart.
Keywords: Bayesian u-chart, economic design, optimal stopping, semi-Markov decision process, statistical process control
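The control-limit policy proved in this abstract can be illustrated with a minimal sketch: update the posterior probability that the process is out of control after each sampled defect count (Poisson in/out-of-control rates), and stop when it crosses a fixed limit. All numbers here (rates, shift probability, limit) are assumed for illustration; the paper's optimal limit comes from its semi-Markov decision process algorithm, not from this sketch.

```python
import math

def poisson_pmf(k, lam):
    # P(K = k) for a Poisson(lam) defect count.
    return math.exp(-lam) * lam**k / math.factorial(k)

LAM_IN, LAM_OUT = 2.0, 5.0   # defects/unit in vs. out of control (assumed)
Q = 0.05                     # per-epoch chance of shifting out of control
LIMIT = 0.90                 # control limit on the posterior (assumed)

def run(counts, prior=0.0):
    p = prior
    for t, k in enumerate(counts, 1):
        p = p + (1 - p) * Q                  # process may shift before sampling
        num = p * poisson_pmf(k, LAM_OUT)
        den = num + (1 - p) * poisson_pmf(k, LAM_IN)
        p = num / den                        # Bayes update on the count
        if p > LIMIT:
            return t, p                      # stop; search for assignable cause
    return None, p

stop_at, post = run([2, 1, 6, 7, 8])
print(stop_at, round(post, 3))
```

A run of low counts leaves the posterior small and never triggers the stop, while the rising counts above trip the limit; the economic design question is then choosing `LIMIT` to minimise long-run average cost.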
Procedia PDF Downloads 573
16986 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual target detection tasks and the required recognition accuracy increase, target detection network models have also become very large. Such huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another, lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the huge difference between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
Keywords: object detection, knowledge distillation, convolutional network, model compression
Procedia PDF Downloads 278
16985 E-Consumers’ Attribute Non-Attendance Switching Behavior: Effect of Providing Information on Attributes
Authors: Leonard Maaya, Michel Meulders, Martina Vandebroek
Abstract:
Discrete Choice Experiments (DCE) are used to investigate how product attributes affect decision-makers’ choices. In DCEs, choice situations consisting of several alternatives are presented from which choice-makers select the preferred alternative. Standard multinomial logit models based on random utility theory can be used to estimate the utilities for the attributes. The overarching principle in these models is that respondents understand and use all the attributes when making choices. However, studies suggest that respondents sometimes ignore some attributes (commonly referred to as Attribute Non-Attendance/ANA). The choice modeling literature presents ANA as a static process, i.e., respondents’ ANA behavior does not change throughout the experiment. However, respondents may ignore attributes due to changing factors like availability of information on attributes, learning/fatigue in experiments, etc. We develop a dynamic mixture latent Markov model to model changes in ANA when information on attributes is provided. The model is illustrated on e-consumers’ webshop choices. The results indicate that the dynamic ANA model describes the behavioral changes better than modeling the impact of information using changes in parameters. Further, we find that providing information on attributes leads to an increase in the attendance probabilities for the investigated attributes.
Keywords: choice models, discrete choice experiments, dynamic models, e-commerce, statistical modeling
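The multinomial-logit building block and the effect of attribute non-attendance can be sketched numerically. The attributes (price, delivery days), their levels, and the taste coefficients below are invented for illustration; ANA is modelled here, as in the literature the abstract cites, by zeroing the ignored attribute's coefficient.

```python
import math

def choice_probs(utilities):
    # Numerically stable softmax over alternative utilities.
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# Three alternatives described by (price, delivery_days); betas assumed.
beta_price, beta_delivery = -0.8, -0.3
alts = [(3.0, 5), (4.0, 2), (2.5, 7)]
utils = [beta_price * p + beta_delivery * d for p, d in alts]
probs = choice_probs(utils)

# Attribute non-attendance: a respondent ignoring price behaves as if
# beta_price were zero, so only delivery time drives the choice.
utils_ana = [0.0 * p + beta_delivery * d for p, d in alts]
probs_ana = choice_probs(utils_ana)
print([round(x, 3) for x in probs], [round(x, 3) for x in probs_ana])
```

The dynamic latent Markov model in the paper would, on top of this, let each respondent switch between "attending" and "non-attending" classes across choice tasks; that switching layer is not reproduced here.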
Procedia PDF Downloads 140
16984 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna
Abstract:
Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for the biological control of pests, and it is important to determine the optimal conditions for producing the highest amount of its conidiospores. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). In order to gather data on this process, 30 experiments were carried out taking into account the number of hours of culture (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75, and 80 percent), obtaining as the response the number of conidiospores per gram of dry mass. The experimental results were used in an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers, with every hidden layer having from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which learns the relationship between input and output values. The ANN with the best performance was chosen in order to simulate the process and maximize conidiospore production. The best-performing ANN has 2 inputs, 1 output, and three hidden layers with 3, 10, and 10 neurons, respectively. Its performance shows an R2 value of 0.9900, and the root mean squared error is 1.2020. This ANN predicted that a maximum of 644,175,467 conidiospores per gram of dry mass is obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum because the R2 value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network
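The architecture search described in this abstract, 1,110 candidate networks, can be reproduced as a simple enumeration: one to three hidden layers, each with 1 to 10 neurons, gives 10 + 10² + 10³ = 1,110 configurations. The sketch below only enumerates the configurations; training each one with Levenberg-Marquardt is left to whatever ANN library is used.

```python
from itertools import product

# Enumerate every candidate architecture: depth 1-3 hidden layers,
# each layer with 1-10 neurons.
configs = []
for depth in (1, 2, 3):
    configs.extend(product(range(1, 11), repeat=depth))

print(len(configs))  # 10 + 10**2 + 10**3 candidate networks
```

The winning configuration reported above, three hidden layers of 3, 10, and 10 neurons, is one tuple in this list; the search loop would train each tuple and keep the one with the best validation error.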
Procedia PDF Downloads 158
16983 Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Authors: Wanhyun Cho, Soonja Kang, Sangkyoon Kim, Soonyoung Park
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data, and we then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more efficient at human action recognition than existing methods.
Keywords: human action recognition, Bayesian HMM, Dirichlet process mixture model, Gaussian-Wishart emission model, variational Bayesian inference, prior distribution and approximate posterior distribution, KTH dataset
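The HMM-based classification step underlying methods like this one can be illustrated with the classical forward algorithm: score a new observation sequence under each action's model and pick the highest likelihood. This sketch uses toy parameters and discrete emissions, a deliberate simplification of the paper's Gaussian-Wishart emissions and variational posterior.

```python
import math

pi = [0.6, 0.4]                  # initial state distribution (assumed)
A = [[0.7, 0.3], [0.4, 0.6]]     # state transition matrix (assumed)
B = [[0.9, 0.1], [0.2, 0.8]]     # emission probs for symbols 0/1 (assumed)

def forward_loglik(obs):
    # Forward algorithm: alpha[s] = P(obs so far, state s).
    alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * A[p][s] for p in range(2)) * B[s][o]
                 for s in range(2)]
    return math.log(sum(alpha))

# A new sequence would be classified by comparing this log-likelihood
# across per-action HMMs; here we just score one sequence.
print(round(forward_loglik([0, 0, 1, 1]), 4))
```

In the paper, the predictive distribution derived from the variational posterior plays the role this single fixed-parameter likelihood plays here.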
Procedia PDF Downloads 353
16982 Identifying the Hidden Curriculum Components in the Nursing Education
Authors: Alice Khachian, Shoaleh Bigdeli, Azita Shoghie, Leili Borimnejad
Abstract:
Background and aim: The hidden curriculum is crucial in nursing education and can determine professionalism and professional competence; it has a significant effect on students' moral performance in relation to patients. The present study was conducted with the aim of identifying the hidden curriculum components in a nursing and midwifery faculty. Methodology: The ethnographic study was conducted over two years using the Spradley method in one of the nursing schools located in Tehran. In this focused ethnographic research, the approach of Lincoln and Guba, i.e., transferability, confirmability, and dependability, was used. To increase the validity of the data, they were collected from different sources, such as participatory observation, formal and informal interviews, and document review. Two hundred days of participatory observation, fifty informal interviews, and fifteen formal interviews, drawing on the maximum opportunities and conditions available to obtain multiple and multilateral information, added to the validity of the data. Due to the COVID-19 situation, some interviews were conducted virtually, and the activity of professors and students in the virtual space was also monitored. Findings: The components of the hidden curriculum of the faculty are the atmosphere (physical environment, organizational structure, rules and regulations, hospital environment), the interactions between actors, and teaching-learning activities, which ultimately revealed "a disconnection between goals, speech, behavior, and results". Conclusion: The atmosphere, the various actors, and the activities mutually affect the process of student development: students have the most contact first with their peers, which leads to the most learning, and second with their teachers. Clinicians who have close, person-to-person contact with students can have very important effects on them.
Students who meet capable and satisfied professors on their way become interested in their field and hopeful about their future by following the example of these professors. On the other hand, weak and dissatisfied professors lead students to feel abandoned; by forming a colony of peers with different backgrounds, they distort the personality of a group of students, who move away from family values, which necessitates a change in some cultural practices at the faculty level.
Keywords: hidden curriculum, nursing education, ethnography, nursing
Procedia PDF Downloads 109
16981 A Watermarking Signature Scheme with Hidden Watermarks and Constraint Functions in the Symmetric Key Setting
Authors: Yanmin Zhao, Siu Ming Yiu
Abstract:
Claiming ownership of an executable program is a non-trivial task. An emerging direction is to add a watermark to the program such that the watermarked program preserves the original program’s functionality, while removing the watermark would severely destroy the functionality of the watermarked program. In this paper, the first watermarking signature scheme with the watermark and the constraint function hidden in the symmetric-key setting is constructed. The scheme uses the well-known techniques of lattice trapdoors and lattice evaluation. The watermarking signature scheme is unforgeable under the Short Integer Solution (SIS) assumption and satisfies other security requirements such as the unremovability property.
Keywords: short integer solution (SIS) problem, symmetric-key setting, watermarking schemes, watermarked signatures
Procedia PDF Downloads 133
16980 Dynamic Network Approach to Air Traffic Management
Authors: Catia S. A. Sima, K. Bousson
Abstract:
Congestion in the Terminal Maneuvering Areas (TMAs) of larger airports impacts all aspects of air traffic flow, not only at the national level, but may also induce arrival delays at the international level. Hence, there is a need to monitor the air traffic flow in TMAs appropriately so that efficient decisions may be taken to manage their occupancy rates. It would be desirable to physically enlarge the existing airspace to accommodate all existing demand, but this idea is entirely utopian; consequently, several studies and analyses have been developed over the past decades to meet the challenges arising from the dizzying expansion of the aeronautical industry. The main objective of the present paper is to propose concepts to manage and reduce the degree of uncertainty in air traffic operations, maximizing the interest of all involved, ensuring a balance between demand and supply, and developing and/or adapting resources that enable a rapid and effective adaptation of measures to the current context and the changes perceived in the aeronautical industry. A central task is to increase air traffic flow management capacity, taking into account not only a wide range of methodologies but also the equipment and/or tools already available in the aeronautical industry. The efficient use of these resources is crucial, as human working capacity is limited and the actors involved in all processes related to air traffic flow management are increasingly overloaded; as a result, operational safety could be compromised. The methodology used to address these issues is based on the advantages of applying Markov chain principles, which enable the construction of a simplified dynamic network model that describes air traffic flow behavior, anticipating changes and eventual measures that could better absorb the impact of increased demand.
Through this model, the proposed concepts are shown to have the potential to optimize air traffic flow management, combining the operation of the existing resources at each moment with the circumstances found in each TMA and using historical data from air traffic operations and specificities of the aeronautical industry, namely in the Portuguese context.
Keywords: air traffic flow, terminal maneuvering area, TMA, air traffic management, ATM, Markov chains
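The Markov-chain idea behind such a dynamic network model can be sketched as a small discrete-time chain over TMA occupancy levels, iterated to its long-run (stationary) distribution. The three states and the transition probabilities below are invented for illustration and are not calibrated to any real TMA.

```python
# Transition matrix over occupancy states (low, medium, high) -- assumed.
P = [[0.8, 0.15, 0.05],
     [0.3, 0.5, 0.2],
     [0.1, 0.4, 0.5]]

def step(dist, P):
    # One Markov step: new_dist[j] = sum_i dist[i] * P[i][j].
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]          # start in the "low" occupancy state
for _ in range(200):
    dist = step(dist, P)

print([round(x, 4) for x in dist])  # long-run share of time in each state
```

Estimating `P` from historical traffic data and reading off the long-run share of time spent in congested states is the kind of occupancy-rate insight the abstract's dynamic network is meant to provide.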
Procedia PDF Downloads 133
16979 Estimating Anthropometric Dimensions for Saudi Males Using Artificial Neural Networks
Authors: Waleed Basuliman
Abstract:
Anthropometric dimensions are considered one of the important factors when designing human-machine systems. In this study, the estimation of anthropometric dimensions has been improved by using an Artificial Neural Network (ANN) model that is able to predict the anthropometric measurements of Saudi males in Riyadh City. A total of 1427 Saudi males aged 6 to 60 years participated in measuring 20 anthropometric dimensions. These anthropometric measurements are considered important for designing work and life applications in Saudi Arabia. The data were collected over eight months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining 15 dimensions were set as the measured variables (the model’s outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to estimate the body dimensions of the Saudi male population in Riyadh City. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. These errors are lower, and thus better, than those found in the literature. Finally, the accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a regression model. The ANN model showed a higher coefficient of determination (R2) between the predicted and actual dimensions than the regression model.
Keywords: artificial neural network, anthropometric measurements, back-propagation
Procedia PDF Downloads 487
16978 Idea Expropriation, Incentives, and Governance within Organizations
Authors: Gulseren Mutlu, Gurupdesh Pandher
Abstract:
This paper studies the strategic interplay between innovation, incentives, expropriation threat and disputes arising from expropriation from an intra-organization perspective. We present a simple principal-agent model with hidden actions and hidden information in which two employees can choose how much (innovative) effort to exert, whether to expropriate the innovation of the other employee and whether to dispute if innovation is expropriated. The organization maximizes its expected payoff by choosing the optimal reward scheme for both employees as well as whether to encourage or discourage disputes. We analyze two mechanisms under which innovative ideas are not expropriated. First, we show that under a non-contestable mechanism (in which the organization discourages disputes among employees), the organization has to offer a “rent” to the potential expropriator. However, under a contestable mechanism (in which the organization encourages disputes), there is no need for such rent. If the cost of resolving the dispute is negligible, the organization’s expected payoff is higher under a contestable mechanism. Second, we develop a comparable team mechanism in which innovation takes place as a result of the joint efforts of employees and innovation payments are made based on the team outcome. We show that if the innovation value is low and employees have similar productivity, then the organization is better off under a contestable mechanism. On the other hand, if the innovation value is high, the organization is better off under a team mechanism. Our results have important practical implications for the design of innovation reward systems for employees, hiring policy and governance for different companies.
Keywords: innovation, incentives, expropriation threat, dispute resolution
Procedia PDF Downloads 617
16977 Real-Time Gesture Recognition System Using Microsoft Kinect
Authors: Ankita Wadhawan, Parteek Kumar, Umesh Kumar
Abstract:
A gesture is any body movement that expresses an attitude or sentiment. Gestures, as a sign language, are used by deaf people to convey messages, which helps eliminate the communication barrier between deaf people and hearing persons. Nowadays, everybody uses mobile phones and computers as very important gadgets in their lives, but for some physically challenged people who are deaf or blind, the use of a mobile phone or computer-like device is very difficult. So, there is an immense need for a system that works on body gestures or sign language as input. In this research, the Microsoft Kinect sensor, SDK V2, and the Hidden Markov Model Toolkit (HTK) are used to recognize objects, object motion, and human body joints through a touchless NUI (Natural User Interface) in real time. The depth data collected from the Microsoft Kinect have been used to recognize gestures of Indian Sign Language (ISL). The recorded clips were analyzed using depth, IR, and skeletal data at different angles and positions. The proposed system has an average accuracy of 85%. The developed touchless NUI provides an interface that recognizes gestures and controls cursor and click operations on a computer just by waving a hand. This research will help deaf people to make use of mobile phones and computers and to socialize with other persons in society.
Keywords: gesture recognition, Indian sign language, Microsoft Kinect, natural user interface, sign language
Procedia PDF Downloads 306
16976 Deterioration Prediction of Pavement Load Bearing Capacity from FWD Data
Authors: Kotaro Sasai, Daijiro Mizutani, Kiyoyuki Kaito
Abstract:
Expressways in Japan have been built at an accelerating pace since the 1960s with the aid of rapid economic growth. About 40 percent of the length of expressways in Japan is now 30 years old or older and has become superannuated. Time-related deterioration has therefore reached a degree at which administrators, from the standpoint of operation and maintenance, are forced to take prompt, large-scale measures aimed at repairing inner damage deep in pavements. Such measures have already been implemented for bridge management in Japan and are expected to be embodied for pavement management as well; thus, planning methods for these measures are increasingly in demand. Deterioration of the layers around the road surface, such as the surface course and binder course, occurs in the early stages of the whole pavement deterioration process, around 10 to 30 years after construction. These layers have been repaired primarily because inner damage usually becomes significant only after outer damage, and because surveys for measuring inner damage, such as Falling Weight Deflectometer (FWD) surveys and open-cut surveys, are costly and time-consuming, which has made it difficult for administrators to focus on inner damage as much as they should. As expressways today exhibit serious time-related deterioration deriving from their long service lives, the repair of layers deep in pavements, such as the base course and subgrade, must clearly be taken into consideration when planning large-scale maintenance. This sort of maintenance requires precisely predicting degrees of deterioration as well as grasping the present condition of pavements. Methods for predicting deterioration are either mechanical or statistical. While few mechanical models have been presented, as far as the authors know, previous studies have presented statistical methods for predicting deterioration in pavements.
One describes the deterioration process by estimating a Markov deterioration hazard model, while another illustrates it by estimating a proportional deterioration hazard model. Both studies analyze deflection data obtained from FWD surveys and present statistical methods for predicting the deterioration process of the layers around the road surface; however, the base course and subgrade layers remain unanalyzed. In this study, data collected from FWD surveys are analyzed to predict the deterioration process of the layers deep in pavements, in addition to the surface layers, by estimating a deterioration hazard model with continuous indexes. This model avoids the loss of information that occurs when rating categories are set in a Markov deterioration hazard model for evaluating degrees of deterioration in roadbeds and subgrades. By employing continuous indexes, the model can predict deterioration in each layer of the pavement and evaluate it quantitatively. Additionally, as the model can also depict the probability distribution of the indexes at an arbitrary point and establish an arbitrary risk-control level, this study is expected to provide knowledge such as life-cycle cost and informative content for the decision-making process concerning where and when to perform maintenance.
Keywords: deterioration hazard model, falling weight deflectometer, inner damage, load bearing capacity, pavement
Procedia PDF Downloads 390
16975 Pricing European Continuous-Installment Options under Regime-Switching Models
Authors: Saghar Heidari
Abstract:
In this paper, we study the valuation problem of European continuous-installment options under Markov-modulated models with a partial differential equation approach. Due to the opportunity for continuing or stopping to pay installments, the valuation problem under regime-switching models can be formulated as coupled partial differential equations (CPDE) with free boundary features. To value the installment options, we express the truncated CPDE as a linear complementarity problem (LCP), then a finite element method is proposed to solve the resulted variational inequality. Under some appropriate assumptions, we establish the stability of the method and illustrate some numerical results to examine the rate of convergence and accuracy of the proposed method for the pricing problem under the regime-switching model.
Keywords: continuous-installment option, European option, regime-switching model, finite element method
Procedia PDF Downloads 137
16974 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under two ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward backpropagation network. Results revealed that the GDM optimisation algorithm, with its adaptive learning capability, required a relatively shorter time in both the training and validation phases than the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions for 1-day and 5-day ahead forecasts. In specific statistical terms, average model performance measured by the coefficient of efficiency (CE) was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, real-time forecasting with ANNs should employ training algorithms that avoid the computational overhead of LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure, forecast quality, and mitigation of network overfitting.
On the whole, it is recommended that evaluation should also consider the implications of (i) data quality and quantity and (ii) transfer functions for overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
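The gradient descent with momentum (GDM) update compared above can be sketched in a few lines. This is a generic illustration on a toy quadratic loss, not the authors' streamflow network; the learning rate and momentum coefficient are illustrative assumptions.

```python
# Minimal sketch of the gradient descent with momentum (GDM) update rule,
# applied to a toy quadratic loss f(w) = 0.5 * w^2. The learning rate and
# momentum coefficient are illustrative assumptions, not study values.

def gdm_minimize(grad, w0, lr=0.1, momentum=0.9, steps=200):
    """Minimise a 1-D function given its gradient, using momentum."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v                        # parameter update
    return w

# Toy example: f(w) = 0.5 * w^2, so grad f(w) = w; minimum at w = 0.
w_star = gdm_minimize(lambda w: w, w0=5.0)
```

The momentum term is what gives GDM its speed advantage on ill-conditioned losses: past gradients keep contributing to the step, damping oscillations across steep directions.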
Procedia PDF Downloads 152
16973 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant
Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani
Abstract:
Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large datasets makes them particularly effective models. The primary obstacle in improving the performance of these models is the careful choice of suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a sophisticated approach: hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) for an optimised deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m³ and a total volume of 1914 m³; its internal diameter and height are 19 m and 7.14 m, respectively. Data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques were applied to the pre-processed data (MinMaxScaler, RobustScaler and StandardScaler) and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R²), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
Keywords: anaerobic digestion, biogas production, deep neural network, hybrid BO-TPE, hyperparameters tuning
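The three normalization schemes compared in the study can be sketched in plain Python; these are generic re-implementations of the usual min-max, z-score, and robust (median/IQR) formulas, not the scikit-learn objects used in the paper, and the sample data are illustrative.

```python
# Minimal sketches of the three normalization techniques compared above,
# using their standard formulas (min-max, z-score, median/IQR). These are
# illustrative, not the library implementations used in the study.
import statistics

def minmax_scale(xs):
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def standard_scale(xs):
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def robust_scale(xs):
    # centre on the median and scale by the interquartile range (IQR),
    # which makes the transform far less sensitive to outliers
    q1, med, q3 = statistics.quantiles(xs, n=4)
    return [(x - med) / (q3 - q1) for x in xs]

data = [1.0, 2.0, 3.0, 4.0, 100.0]   # one extreme outlier
scaled = robust_scale(data)
```

The outlier in `data` illustrates why a robust scaler can outperform min-max scaling on messy plant measurements: the median and IQR barely move when an extreme value appears.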
Procedia PDF Downloads 38
16972 Node Insertion in Coalescence Hidden-Variable Fractal Interpolation Surface
Authors: Srijanani Anurag Prasad
Abstract:
The Coalescence Hidden-variable Fractal Interpolation Surface (CHFIS) is built by combining interpolation data through an Iterated Function System (IFS). The interpolation data in a CHFIS acquire a row and/or column of new values when a single point is inserted. Alternatively, a row and/or column of additional points may be placed in the given interpolation data to demonstrate the node-inserted CHFIS. There are three techniques for inserting new points, corresponding to the row and/or column of nodes inserted, and each technique is further classified into four types based on the values of the inserted nodes. As a result, numerous forms of node insertion can be realised in a CHFIS.
Keywords: fractal, interpolation, iterated function system, coalescence, node insertion, knot insertion
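The surface construction above generalises the one-dimensional fractal interpolation function, and a minimal 1-D sketch of the underlying IFS may make the mechanism concrete: affine maps send the whole data interval onto each subinterval while matching endpoints, with free vertical scaling factors. The data points and scaling factors below are illustrative assumptions.

```python
# Minimal 1-D sketch of the fractal interpolation IFS of the kind that
# underlies a CHFIS: maps w_i(x, y) = (L_i(x), F_i(x, y)) contracting the
# whole interval onto each subinterval while matching the endpoints.
# Data points and vertical scaling factors alpha_i are illustrative.

def build_ifs(xs, ys, alphas):
    """Return the affine maps w_i for interpolation nodes (xs[i], ys[i])."""
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    maps = []
    for i in range(1, len(xs)):
        a = (xs[i] - xs[i - 1]) / (xN - x0)          # L_i(x) = a*x + e
        e = (xN * xs[i - 1] - x0 * xs[i]) / (xN - x0)
        alpha = alphas[i - 1]
        # F_i(x, y) = alpha*y + c*x + d, with c, d chosen so that
        # w_i(x0, y0) = (x_{i-1}, y_{i-1}) and w_i(xN, yN) = (x_i, y_i)
        c = (ys[i] - ys[i - 1] - alpha * (yN - y0)) / (xN - x0)
        d = ys[i - 1] - alpha * y0 - c * x0
        maps.append(lambda x, y, a=a, e=e, al=alpha, c=c, d=d:
                    (a * x + e, al * y + c * x + d))
    return maps

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, -1.0, 2.0]
maps = build_ifs(xs, ys, alphas=[0.3, -0.2, 0.4])
```

Because of the endpoint conditions, the attractor of this IFS is the graph of a continuous function passing through every interpolation node; in the CHFIS setting the same idea is applied with hidden variables over a 2-D grid.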
Procedia PDF Downloads 100
16971 Classification of Barley Varieties by Artificial Neural Networks
Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran
Abstract:
In this study, an Artificial Neural Network (ANN) was developed to classify barley varieties. For this purpose, physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of grain, were determined, and these properties were found to be statistically significant with respect to variety. Three ANN models, N-1, N-2 and N-3, were constructed and their performances compared. The best-fit model was N-1, whose structure was designed with an input layer of 11 neurons, 2 hidden layers and an output layer of 1 neuron. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of grain were used as input parameters, and variety as the output parameter. R², Root Mean Square Error and Mean Error for the N-1 model were found to be 99.99%, 0.00074 and 0.009%, respectively. All results obtained by the N-1 model were quite consistent with the real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
Keywords: physical properties, artificial neural networks, barley, classification
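The fit statistics reported above (R² and RMSE) follow their standard definitions, which can be sketched directly; the sample observations and predictions below are illustrative, not the barley data.

```python
# Sketch of the goodness-of-fit statistics reported above (R², RMSE),
# using their standard definitions. The sample data are illustrative.
import math

def r_squared(obs, pred):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))  # residual SS
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)         # total SS
    return 1.0 - ss_res / ss_tot

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.0, 4.2]
```

An R² of 99.99% with an RMSE of 0.00074, as reported for model N-1, means the residual sum of squares is a vanishing fraction of the total variance in the target.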
Procedia PDF Downloads 178
16970 Cost-Effectiveness of a Certified Service or Hearing Dog Compared to a Regular Companion Dog
Authors: Lundqvist M., Alwin J., Levin L-A.
Abstract:
Background: Assistance dogs are dogs trained to assist persons with functional impairment or chronic diseases. The assistance dog concept includes different types: guide dogs, hearing dogs, and service dogs. Service dogs can be further divided into the subgroups of physical service dogs, diabetes alert dogs, and seizure alert dogs. To examine the long-term effects of health care interventions, in terms of both resource use and health outcomes, cost-effectiveness analyses can be conducted; such analyses provide important input to decision-makers when setting priorities. Little is known about the cost-effectiveness of assistance dogs. The study aimed to assess the cost-effectiveness of certified service or hearing dogs in comparison to regular companion dogs. Methods: The main data source for the analysis was the "service and hearing dog project", a longitudinal interventional study with a pre-post design that incorporated fifty-five owners and their dogs. Data were collected on all relevant costs affected by the use of a service dog, such as municipal services, health care costs, costs of sick leave, and costs of informal care. Health-related quality of life was measured with the standardized instrument EQ-5D-3L. A decision-analytic Markov model was constructed to conduct the cost-effectiveness analysis. Outcomes were estimated over a 10-year time horizon. The incremental cost-effectiveness ratio, expressed as cost per quality-adjusted life year gained, was the primary outcome. The analysis employed a societal perspective. Results: The cost-effectiveness analysis showed that, compared to a regular companion dog, a certified dog is cost-effective, with both lower total costs (-32,000 USD) and more quality-adjusted life years (0.17). We will also present subgroup results analyzing the cost-effectiveness of physical service dogs and diabetes alert dogs.
Conclusions: The study shows that a certified dog is cost-effective in comparison with a regular companion dog for individuals with functional impairments or chronic diseases. Analyses of uncertainty imply that further studies are needed.
Keywords: service dogs, hearing dogs, health economics, Markov model, quality-adjusted life years
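A decision-analytic Markov model of the kind described can be sketched as a simple cohort simulation over the 10-year horizon. All states, transition probabilities, costs, and utilities below are purely illustrative assumptions, not the study's estimates.

```python
# Minimal sketch of a decision-analytic Markov (cohort) model: two health
# states plus death, annual cycles over a 10-year horizon, accumulating
# discounted costs and QALYs, then an ICER. All transition probabilities,
# costs, and utilities are illustrative, not the study's data.

def run_markov(trans, costs, utils, horizon=10, disc=0.03):
    """trans[i][j] is the yearly transition probability from state i to j."""
    dist = [1.0, 0.0, 0.0]            # whole cohort starts in state 0
    total_cost = total_qaly = 0.0
    for year in range(horizon):
        df = 1.0 / (1.0 + disc) ** year        # discount factor
        total_cost += df * sum(p * c for p, c in zip(dist, costs))
        total_qaly += df * sum(p * u for p, u in zip(dist, utils))
        dist = [sum(dist[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

# States: 0 = well, 1 = impaired, 2 = dead (absorbing).
trans = [[0.90, 0.08, 0.02], [0.05, 0.90, 0.05], [0.0, 0.0, 1.0]]
cost_dog = run_markov(trans, costs=[2000, 6000, 0], utils=[0.85, 0.60, 0.0])
cost_usual = run_markov(trans, costs=[3000, 8000, 0], utils=[0.80, 0.55, 0.0])
# Incremental cost-effectiveness ratio: extra cost per QALY gained
icer = (cost_dog[0] - cost_usual[0]) / (cost_dog[1] - cost_usual[1])
```

With lower costs and higher QALYs in the intervention arm, as the study reports for certified dogs, the ICER is negative and the intervention is said to dominate the comparator.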
Procedia PDF Downloads 151
16969 A Survey of Feature-Based Steganalysis for JPEG Images
Authors: Syeda Mainaaz Unnisa, Deepa Suresh
Abstract:
Due to the increased usage of public domain channels, such as the internet, and of communication technology, there is concern about the protection of intellectual property and about security threats. This interest has led to growth in researching and implementing techniques for information hiding. Steganography is the art and science of hiding information in a private manner such that its existence cannot be recognized. Communication using steganographic techniques renders invisible not only the secret message but also the very presence of hidden communication. Steganalysis is the art of detecting the presence of this hidden communication. Parallel to steganography, steganalysis is also gaining prominence, since the detection of hidden messages can prevent catastrophic security incidents from occurring. Steganalysis can also be incredibly helpful in identifying and revealing holes in current steganographic techniques that leave them vulnerable to attacks. Through the formulation of new effective steganalysis methods, further research can improve the resistance of the tested steganography techniques. A feature-based steganalysis method for JPEG images calculates image features using the L1 norm of the difference between a stego image and a calibrated version of that image. Calibration helps retrieve some of the parameters of the cover image, revealing the variations between the cover and stego images and enabling more accurate detection. Applying this method to various steganographic schemes, experimental results were compared and evaluated to derive conclusions and principles for more protected JPEG steganography.
Keywords: cover image, feature-based steganalysis, information hiding, steganalysis, steganography
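The calibration feature described, the L1 norm of the difference between a stego image and its calibrated version, can be sketched generically. In practice, JPEG calibration decompresses the image, crops it by a few pixels, and recompresses it; that step is elided here, so the `calibrated` array below is simply an assumed stand-in for its output.

```python
# Sketch of the feature described above: the (normalised) L1 norm of the
# difference between a stego image and its calibrated version. Real JPEG
# calibration (decompress, crop a few pixels, recompress) is elided; the
# `calibrated` array stands in for its output, and values are illustrative.

def l1_feature(stego, calibrated):
    """Mean absolute difference between two equally sized grayscale images."""
    n = 0
    total = 0.0
    for row_s, row_c in zip(stego, calibrated):
        for s, c in zip(row_s, row_c):
            total += abs(s - c)
            n += 1
    return total / n

stego = [[52, 55, 61], [59, 79, 61], [76, 62, 60]]
calibrated = [[52, 54, 60], [60, 78, 62], [75, 62, 61]]
feature = l1_feature(stego, calibrated)
```

The intuition is that calibration approximates the cover image's statistics, so a large L1 feature signals that embedding has disturbed the image relative to its own calibrated baseline.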
Procedia PDF Downloads 216
16968 A Flexible Bayesian State-Space Modelling for Population Dynamics of Wildlife and Livestock Populations
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Hans-Peter Piepho
Abstract:
We aim to model the dynamics of wildlife or pastoral livestock populations to understand their population changes and hence support wildlife conservation and human welfare. The study is motivated by age-sex structured population counts from different regions of the Serengeti-Mara during the period 1989-2003. Developing reliable and realistic models for the population dynamics of large herbivore populations can be a very complex and challenging exercise. However, the Bayesian statistical domain offers flexible computational methods that enable the development and efficient implementation of complex population dynamics models. In this work, we have used a novel Bayesian state-space model to analyse the dynamics of topi and hartebeest populations in the Serengeti-Mara Ecosystem of East Africa. The state-space model involves survival probabilities of the animals, which in turn depend on factors such as monthly rainfall and size of habitat that cause recent declines in the numbers of these herbivore populations and potentially threaten their future viability in the ecosystem. Our study shows that seasonal rainfall is the most important factor shaping the population size of the animals and indicates the age class most severely affected by any change in weather conditions.
Keywords: Bayesian state-space model, Markov chain Monte Carlo, population dynamics, conservation
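The state-space structure described, a latent population evolving through rainfall-dependent survival with noisy counts observed on top, can be sketched as a forward simulation. The logistic survival-rainfall link and every parameter value below are illustrative assumptions, not the fitted model.

```python
# Minimal sketch of the two layers of a state-space population model:
# a latent population evolving through rainfall-dependent survival (the
# process layer) and counts observed with error (the observation layer).
# The logistic link and all parameter values are illustrative assumptions.
import math
import random

def survival(rainfall_mm, a=-2.0, b=0.02):
    """Logistic link: more rainfall -> higher monthly survival probability."""
    return 1.0 / (1.0 + math.exp(-(a + b * rainfall_mm)))

def simulate(n0, rainfall_series, birth_rate=0.25, seed=1):
    rng = random.Random(seed)
    n = n0
    counts = []
    for rain in rainfall_series:
        n = max(0.0, n * survival(rain) * (1.0 + birth_rate))  # process layer
        counts.append(n * rng.uniform(0.9, 1.1))   # observation layer: noisy count
    return counts

counts = simulate(1000.0, rainfall_series=[120, 80, 40, 150, 200, 60])
```

Bayesian fitting inverts this simulation: given the observed counts, MCMC recovers the posterior of the latent population sizes and of the survival-rainfall parameters.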
Procedia PDF Downloads 208
16967 Probabilistic Modeling Laser Transmitter
Authors: H. S. Kang
Abstract:
A coupled electrical and optical model for the conversion of electrical energy into coherent optical energy for a transmitter-receiver link by a solid-state device is presented. Probability distributions for the travelling laser beam's switching time intervals and for the number of switchings in a time interval are obtained. Selector function mapping is employed to regulate the optical data transmission speed. It is established that regulated laser transmission from a PhotoActive Laser transmitter follows the principle of invariance. This considerably simplifies the design of PhotoActive Laser Transmission networks.
Keywords: computational mathematics, finite difference Markov chain methods, sequence spaces, singularly perturbed differential equations
Procedia PDF Downloads 431
16966 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain
Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma
Abstract:
In this paper, we propose an implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, works on generalized single-hidden-layer feedforward neural networks (SLFNs); however, the hidden layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction of a watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and DWT is applied to each block to transform it into the low-frequency sub-band domain. Essentially, ELM provides a unified learning platform in which a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFN, is used for watermark embedding and extraction in the cover image. ELM has widespread application, from binary classification and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very small complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
Keywords: BER, DWT, extreme learning machine (ELM), PSNR
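The basic ELM recipe, fixed random hidden-layer weights followed by a single linear solve for the output weights, can be sketched on a toy regression problem. The network size and data here are illustrative assumptions, and none of the paper's DWT or watermarking machinery is included.

```python
# Minimal sketch of the basic ELM recipe referenced above: hidden-layer
# weights are set randomly and never tuned; only the output weights come
# from a linear solve. Toy 1-D regression; sizes and data are illustrative,
# and the DWT/watermarking pipeline of the paper is not reproduced.
import math
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def elm_fit(xs, ys, hidden, seed=0):
    rng = random.Random(seed)
    w = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(hidden)]
    H = [[math.tanh(a * x + b) for a, b in w] for x in xs]  # random feature map
    beta = solve(H, ys)          # square system here: hidden == len(xs)
    return lambda x: sum(bj * math.tanh(a * x + b)
                         for bj, (a, b) in zip(beta, w))

xs = [0.0, 0.5, 1.0, 1.5]
ys = [0.0, 0.25, 1.0, 2.25]      # samples of y = x^2
model = elm_fit(xs, ys, hidden=len(xs))
```

Because only `beta` is learned, training reduces to one (possibly regularized) linear solve, which is the source of ELM's speed advantage over iteratively trained networks and SVMs.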
Procedia PDF Downloads 311
16965 Cooperative Coevolution for Neuro-Evolution of Feed Forward Networks for Time Series Prediction Using Hidden Neuron Connections
Authors: Ravneil Nand
Abstract:
Cooperative coevolution uses problem decomposition to solve a larger problem by breaking it down into a number of smaller sub-problems. Different problem decomposition methods have their own strengths and limitations depending on the neural network used and the application problem. In this paper, we introduce a new problem decomposition method known as Hidden-Neuron Level Decomposition (HNL). The HNL method is compared with established problem decomposition methods in time series prediction. The results show that the proposed approach improves results on some benchmark data sets when compared to the standalone method, and is competitive with methods from the literature.
Keywords: cooperative coevolution, feed forward network, problem decomposition, neuron, synapse
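The cooperative-coevolution cycle, decompose the parameter vector into subcomponents and optimise each in round-robin fashion against the current best values of the others, can be sketched generically. The simple mutation search and the sphere benchmark below stand in for the neuro-evolution and HNL-specific details, which are assumptions of this sketch.

```python
# Generic sketch of the cooperative coevolution cycle described above:
# the parameter vector is decomposed into subcomponents, each optimised
# in round-robin fashion in the context of the current best values of the
# others. A simple greedy mutation search on the sphere benchmark stands
# in for the neuro-evolution and HNL-specific details of the paper.
import random

def sphere(v):
    return sum(x * x for x in v)

def cooperative_coevolve(dim, groups, cycles=50, seed=42):
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    start = best[:]
    for _ in range(cycles):
        for group in groups:                 # round-robin over subcomponents
            trial = best[:]
            for i in group:                  # mutate only this subcomponent
                trial[i] += rng.gauss(0, 0.5)
            if sphere(trial) < sphere(best): # greedy acceptance
                best = trial
    return start, best

# Decompose an 8-parameter problem into 4 two-parameter subcomponents,
# analogous to grouping a network's weights by hidden neuron.
groups = [[0, 1], [2, 3], [4, 5], [6, 7]]
start, best = cooperative_coevolve(dim=8, groups=groups)
```

In neuro-evolution, the choice of `groups` is exactly what a decomposition method like HNL defines, e.g. one subcomponent per hidden neuron together with its connections.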
Procedia PDF Downloads 335
16964 An Exploratory Study on Experiences of Menarche and Menstruation among Adolescent Girls
Authors: Bhawna Devi, Girishwar Misra, Rajni Sahni
Abstract:
Menarche and menstruation are nearly universal experiences in adolescent girls' lives, yet several observations suggest they are rarely explicitly talked about and remain poorly understood. By menarche, girls are likely to have been influenced not only by cultural stereotypes about menstruation but also by information acquired through significant others. Their own expectations about menstruation are likely to influence their reports of the menarcheal experience. The aim of this study is to examine how girls construct meaning around menarche and menstruation in social interactions and specific contexts, alongside conceptualized experiences 'owned' by individual girls. Twenty adolescent girls from New Delhi (India), between the ages of 12 and 19 years (mean age = 15.1), participated in the study. Semi-structured interviews were conducted to capture the nuances of the menarche and menstrual experiences of these twenty adolescent girls. Thematic analysis was used to analyze the data. From the detailed analysis of the transcribed data, the main themes that emerged were: Menarche: A Trammeled Sky to Fly; Menarche as Flashbulb Memory; Hidden Secret: Shame and Fear; Hallmark of Womanhood; and Menarche as Illness. The findings thus reveal that menarche and menstruation were largely constructed as embarrassing, shameful and something to be hidden, specifically within the school context and, in general, outside the home. Menstruation was also constructed as an illness that programmed a 'feeling of weakness' into the girls. The production and perpetuation of gender-related difference narratives was also evident. Implications for individuals, as well as for the subjugation of girls and women, are discussed, and it is argued that current negative representations of, and practices in relation to, menarche and menstruation need to be challenged.
Keywords: embarrassment, gender-related difference, hidden secret, illness, menarche and menstruation
Procedia PDF Downloads 143
16963 Robust Image Design Based Steganographic System
Authors: Sadiq J. Abou-Loukh, Hanan M. Habbi
Abstract:
This paper presents a steganographic system that hides transmitted information without arousing suspicion, and illustrates how the level of secrecy can be increased by using cryptographic techniques. The proposed system first encrypts the image file with a one-time pad key, and then encrypts the message to be hidden, so that encryption is performed before image embedding. A new image file is then created from the original image using a four-triangles operation, and the new image is processed by one of two proposed image processing techniques: thresholding or differential predictive coding (DPC). Afterwards, encryption and decryption keys are generated by a functional key generator; each generated key is used one time only. The encrypted text is hidden in the places that are not used for image processing and key generation. The system has a high embedding rate (0.1875 character/pixel) for true color images (24-bit depth).
Keywords: encryption, thresholding, differential predictive coding, four triangles operation
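The one-time pad step can be sketched with a byte-wise XOR: a random key exactly as long as the message, used once. This shows the general mechanism only, not the paper's image pipeline, and the message below is illustrative.

```python
# Sketch of the one-time pad step used above: byte-wise XOR with a random
# key exactly as long as the message, used one time only. This shows the
# general mechanism; the paper's image-embedding pipeline is not included.
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "one-time pad key must match length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation
otp_decrypt = otp_encrypt

message = b"hidden message"
key = secrets.token_bytes(len(message))   # fresh key, used one time only
cipher = otp_encrypt(message, key)
```

The "used one time only" discipline noted in the abstract is essential: reusing a pad key lets an attacker XOR two ciphertexts and cancel the key entirely.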
Procedia PDF Downloads 493
16962 Subspace Rotation Algorithm for Implementing Restricted Hopfield Network as an Auto-Associative Memory
Authors: Ci Lin, Tet Yeap, Iluju Kiringa
Abstract:
This paper introduces the subspace rotation algorithm (SRA) to train the Restricted Hopfield Network (RHN) as an auto-associative memory. The subspace rotation algorithm is a gradient-free subspace tracking approach based on the singular value decomposition (SVD). In comparison with Backpropagation Through Time (BPTT) for training the RHN, it is observed that SRA always converges to the optimal solution, whereas BPTT cannot achieve the same performance when the model becomes complex and the number of patterns is large. The AUTS case study showed that the RHN model trained by SRA can achieve a better-structured attraction basin, with a generally larger radius, than the Hopfield Network (HNN) model trained by the Hebbian learning rule. Through learning 10000 patterns from the MNIST dataset with RHN models with different numbers of hidden nodes, it is observed that several components can be adjusted to achieve a balance between recovery accuracy and noise resistance.
Keywords: Hopfield neural network, restricted Hopfield network, subspace rotation algorithm, Hebbian learning rule
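The Hebbian baseline against which the SRA-trained RHN is compared can be sketched in a few lines: the weight matrix is the averaged sum of outer products of the stored ±1 patterns with a zeroed diagonal, and recall iterates a sign update. This is the classical HNN baseline only; the RHN/SRA method itself is not reproduced, and the patterns are illustrative.

```python
# Sketch of the classical Hebbian learning rule for the baseline Hopfield
# network (HNN) mentioned above: weights are the averaged outer products
# of the stored +/-1 patterns with a zeroed diagonal, and recall applies
# sign updates. The RHN/SRA method itself is not reproduced here.

def hebbian_weights(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                       # no self-connections
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W

def recall(W, state, steps=5):
    for _ in range(steps):                       # synchronous sign updates
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in W]
    return state

patterns = [[1, -1, 1, -1, 1, -1], [1, 1, -1, -1, 1, 1]]
W = hebbian_weights(patterns)
noisy = [-1, -1, 1, -1, 1, -1]                   # pattern 0 with one bit flipped
restored = recall(W, noisy)
```

The attraction-basin radius discussed in the abstract is exactly how many such flipped bits recall can tolerate before converging to the wrong attractor; the paper's claim is that SRA-trained RHNs generally enlarge it relative to this Hebbian baseline.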
Procedia PDF Downloads 117
16961 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes
Authors: Nadarajah I. Ramesh
Abstract:
Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip times to rainfall depths prior to fitting. One advantage of this approach is that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell; different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model
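A doubly stochastic Poisson process with a two-state Markov intensity, the Markov-modulated structure behind the bucket-tip models above, can be sketched by simulation: the chain switches between a low-rate and a high-rate regime, and tips arrive as a Poisson process at the current regime's rate. All rates below are illustrative assumptions, not fitted values.

```python
# Simulation sketch of a doubly stochastic Poisson process whose rate is
# driven by a two-state continuous-time Markov chain X(t), as in the
# bucket-tip models discussed above. All rates are illustrative.
import random

def simulate_mmpp(tip_rates, switch_rates, t_end, seed=7):
    """tip_rates[k]: tip intensity in state k; switch_rates[k]: rate of
    leaving state k. Returns the sorted list of bucket tip times."""
    rng = random.Random(seed)
    t, state, tips = 0.0, 0, []
    while t < t_end:
        # exponential sojourn time in the current state of X(t)
        sojourn = rng.expovariate(switch_rates[state])
        # within the sojourn, tips form a Poisson process at tip_rates[state]
        s = rng.expovariate(tip_rates[state])
        while s < sojourn and t + s < t_end:
            tips.append(t + s)
            s += rng.expovariate(tip_rates[state])
        t += sojourn
        state = 1 - state            # two-state chain: switch to the other state
    return tips

# State 0: dry spells (rare tips); state 1: rain cells (frequent tips)
tips = simulate_mmpp(tip_rates=[0.05, 5.0], switch_rates=[0.2, 1.0], t_end=100.0)
```

Fitting goes in the opposite direction to this simulation: given the observed tip times, the likelihood is evaluated by summing over the hidden paths of X(t), which is what conditioning on the Markov process in the abstract refers to.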
Procedia PDF Downloads 278