Search results for: Random Anisotropy Ising Model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17777

17477 Application of Data Mining Techniques for Tourism Knowledge Discovery

Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee

Abstract:

Five implementations of three data mining classification techniques were applied experimentally to extract important insights from tourism data. The aim was to find the best-performing algorithm among those compared for tourism knowledge discovery. The knowledge discovery from data process was used as the process model, and 10-fold cross validation was used for testing. Various data preprocessing activities were performed to obtain the final dataset for model building, and classification models of the selected algorithms were built under different scenarios on the preprocessed dataset. The best-performing algorithm on the tourism dataset was Random Forest (76%) before information gain based attribute selection and J48 (C4.5) (75%) after selecting the attributes most relevant to the class (target) attribute. In terms of model building time, attribute selection improved the efficiency of all algorithms, with the Artificial Neural Network (multilayer perceptron) showing the largest improvement (90%). The rules extracted from the decision tree model are presented; they capture intricate, non-trivial knowledge/insights that would otherwise not be discovered by simple statistical analysis.
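
As an illustration of the evaluation protocol described above, the sketch below compares Random Forest with an entropy-based decision tree (a stand-in for J48, which is Weka's C4.5 implementation) under 10-fold cross validation, with and without mutual-information ("information gain") attribute selection. The synthetic data, model settings, and k=10 are hypothetical placeholders, not details from the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier          # stand-in for Weka's J48 (C4.5)
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# synthetic stand-in for the preprocessed tourism dataset
X, y = make_classification(n_samples=500, n_features=25, n_informative=8, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "J48-like tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
for name, clf in models.items():
    base = cross_val_score(clf, X, y, cv=10).mean()      # no attribute selection
    pipe = make_pipeline(SelectKBest(mutual_info_classif, k=10), clf)
    sel = cross_val_score(pipe, X, y, cv=10).mean()      # info-gain-style selection
    print(f"{name}: {base:.2f} -> {sel:.2f}")
```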

Keywords: classification algorithms, data mining, knowledge discovery, tourism

Procedia PDF Downloads 269
17476 Modeling Influence on Petty Corruption Attitudes

Authors: Nina Bijedic, Drazena Gaspar, Mirsad Hadzikadic

Abstract:

Corruption is an influential and widespread problem. One part of it is so-called petty corruption: the widespread giving of small bribes by ordinary citizens trying to influence the workings of public administration or public services. As with all forms of corruption, petty corruption is related to the level of democracy (or administrative efficiency) in a society. The developed model captures some of the factors related to corruptive behavior, as well as people’s attitude towards petty corruption. It has four basic elements: the user’s perception of corruption in the society of interest, the influence of social interactions, the influence of a penalizing mechanism, and the influence of campaigns against petty corruption. The model is agent-based, developed in NetLogo, with many randomized settings that provide a wider range of responses. Interactions of different settings for the variables of these elements provide insight into the influence of each element on the attitude towards petty corruption, as well as on petty corruptive behavior; a schematic re-implementation of the update logic is sketched below.
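
The original model is in NetLogo; the following Python sketch only mirrors its four elements (perceived corruption, social interaction, penalties, campaigns) to make the update logic concrete. All weights, parameter values, and the neighbourhood rule are invented for illustration.

```python
import random

class Citizen:
    def __init__(self):
        self.attitude = random.random()   # 0 = rejects bribery, 1 = accepts it

    def step(self, neighbours, perceived_corruption, penalty, campaign):
        social = sum(n.attitude for n in neighbours) / len(neighbours)
        # weighted mix of the four model elements named in the abstract
        self.attitude = min(1.0, max(0.0,
            0.4 * self.attitude + 0.3 * social
            + 0.3 * perceived_corruption - penalty - campaign))

random.seed(0)
agents = [Citizen() for _ in range(500)]
for t in range(100):
    for a in agents:
        neighbours = random.sample(agents, 5)
        a.step(neighbours, perceived_corruption=0.6, penalty=0.05, campaign=0.02)
print(sum(a.attitude for a in agents) / len(agents))   # mean attitude after 100 steps
```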

Keywords: agent-based model, attitude, influence, petty corruption, society

Procedia PDF Downloads 174
17475 Kinetic Study of Municipal Plastic Waste

Authors: Laura Salvia Diaz Silvarrey, Anh Phan

Abstract:

Municipal Plastic Waste (MPW) comprises a mixture of thermoplastics such as high and low density polyethylene (HDPE and LDPE), polypropylene (PP), polystyrene (PS) and polyethylene terephthalate (PET). The recycling rate of these plastics is low, e.g. only 27% in 2013; the remainder was incinerated or disposed of in landfills. As MPW generation increases by approximately 5% per annum, MPW management technologies have to be developed to comply with legislation. Pyrolysis, a thermochemical decomposition process, provides an excellent alternative for converting MPW into valuable resources like fuels and chemicals. Most studies on waste plastic kinetics have focused only on HDPE and LDPE, with the simple assumption of first-order decomposition, which is not the real reaction mechanism. The aim of this study was to develop a kinetic study for each of the polymers in the MPW mixture using thermogravimetric analysis (TGA) over a range of heating rates (5, 10, 20 and 40°C/min) in an N2 atmosphere with a sample size of 1–4 mm. A model-free kinetic method was applied to quantify the activation energy at each level of conversion. The Kissinger–Akahira–Sunose (KAS) and Flynn–Wall–Ozawa (FWO) equations, jointly with master plots, confirmed that the activation energy was not constant throughout the reaction for all five plastics studied, showing that MPW decomposes through a complex mechanism and not by first-order kinetics. Master plots confirmed that MPW decomposes following a random scission mechanism at conversions above 40%. According to the random scission mechanism, different radicals are formed along the backbone, producing the cleavage of bonds by chain scission into molecules of different lengths. The cleavage of bonds during random scission follows first-order kinetics and is related to the conversion. When a bond is broken, one part of the initial molecule becomes an unsaturated molecule and the other a terminal free radical. The latter can react with hydrogen from an adjacent carbon, releasing another free radical and a saturated molecule, or react with another free radical, forming an alkane. Not every bond cleavage releases a molecule that evaporates: at early stages of the reaction (conversion below 40% and temperature below 300°C), most products are not short enough to evaporate. Only at higher degrees of conversion does the cleavage of bonds mostly release molecules small enough to evaporate.
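
For reference, the two isoconversional (model-free) equations named above are conventionally written as follows, where β is the heating rate, T the absolute temperature, Eₐ the activation energy at conversion α, g(α) the integral reaction model, A the pre-exponential factor and R the gas constant; for each fixed α, the slope of ln(β/T²) (KAS) or ln β (FWO) versus 1/T across the four heating rates yields Eₐ(α). These are the standard textbook forms, not expressions quoted from the paper.

```latex
\text{KAS:}\quad \ln\!\frac{\beta}{T^{2}} = \ln\!\frac{A R}{E_a\, g(\alpha)} - \frac{E_a}{R T}
\qquad\qquad
\text{FWO:}\quad \ln\beta = \ln\!\frac{A E_a}{R\, g(\alpha)} - 5.331 - 1.052\,\frac{E_a}{R T}
```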

Keywords: kinetic, municipal plastic waste, pyrolysis, random scission

Procedia PDF Downloads 328
17474 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction, with the potential to improve the accuracy of disease prediction and thereby enable medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine learning model for disease prediction using symptoms and X-ray images; its importance lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in their diagnosis. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases promptly. The study utilized the Mask R-CNN algorithm, a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information that included both symptom data and X-ray images, and its performance was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study incorporated symptoms as input data for disease prediction, using three different classifiers, namely Random Forest, K-Nearest Neighbor, and Support Vector Machine, trained and tested on the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases from symptoms, with ensemble learning techniques significantly improving prediction accuracy. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. It should be noted, however, that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the training dataset, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
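
The abstract does not name the ensemble scheme, so the sketch below assumes simple soft voting over the three symptom classifiers it lists; the synthetic data stands in for the study's symptom features and disease labels, and all settings are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# synthetic stand-in for symptom vectors (X) and disease labels (y)
X, y = make_classification(n_samples=1000, n_features=30, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
    ],
    voting="soft",  # average the predicted class probabilities
)
ensemble.fit(X_train, y_train)
print(classification_report(y_test, ensemble.predict(X_test)))
```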

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 109
17473 Study of Quantum Lasers of Random Trimer Barrier AlxGa1-xAs Superlattices

Authors: Bentata Samir, Bendahma Fatima

Abstract:

We have numerically studied random trimer barrier AlxGa1-xAs superlattices (RTBSLs). Such systems consist of two different structures randomly distributed along the growth direction, with the additional constraint that barriers of one kind appear only in triplets. An explicit formula is given for evaluating the transmission coefficient of superlattices (SLs) with intentionally correlated disorder. We have specifically investigated the effect of the aluminum concentration, in combination with the structure profile, on the laser wavelengths.
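
The transfer matrix method the paper relies on can be sketched for a 1D piecewise-constant conduction-band profile as follows. The constant 0.0381 eV·nm² is ħ²/2mₑ; the effective mass, barrier heights and widths, the probe energy, and the trimer placement rule are all illustrative values, not the paper's parameters.

```python
import numpy as np

HBAR2_2ME = 0.0381  # hbar^2 / (2 m_e) in eV nm^2

def wavevector(E, V, m_eff):
    # complex wavevector (nm^-1); becomes imaginary inside a barrier (E < V)
    return np.sqrt((E - V) * m_eff / HBAR2_2ME + 0j)

def transmission(E, layers, m_eff=0.067):
    """layers: list of (potential_eV, width_nm), flanked by V = 0 contacts."""
    Vs = [0.0] + [V for V, _ in layers] + [0.0]
    ks = [wavevector(E, V, m_eff) for V in Vs]
    M = np.eye(2, dtype=complex)
    for j, (_, d) in enumerate(layers):
        r = ks[j + 1] / ks[j]
        interface = 0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]])
        prop = np.diag([np.exp(-1j * ks[j + 1] * d), np.exp(1j * ks[j + 1] * d)])
        M = M @ interface @ prop
    r = ks[-1] / ks[-2]
    M = M @ (0.5 * np.array([[1 + r, 1 - r], [1 - r, 1 + r]]))
    return 1.0 / abs(M[0, 0]) ** 2  # same contact material on both sides

# random trimer constraint: the higher (Al-richer) barrier only appears three in a row
rng = np.random.default_rng(0)
layers = []
for _ in range(20):
    if rng.random() < 0.3:
        layers += [(0.30, 2.0), (0.0, 4.0)] * 3   # trimer barriers with wells
    else:
        layers += [(0.23, 2.0), (0.0, 4.0)]       # ordinary barrier with well
print(transmission(0.12, layers))
```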

Keywords: superlattices, transfer matrix method, transmission coefficient, quantum laser

Procedia PDF Downloads 454
17472 Mediation Role of Teachers’ Surface Acting and Deep Acting on the Relationship between Calling Orientation and Work Engagement

Authors: Yohannes Bisa Biramo

Abstract:

This study examined the mediational role of surface acting and deep acting in the relationship between the calling orientation and work engagement of teachers in secondary schools of Wolaita Zone, Wolaita, Ethiopia. A predictive, non-experimental correlational design was used with 300 secondary school teachers. Stratified random sampling followed by a systematic random sampling technique was used to select samples from the target population. To analyze the data, Structural Equation Modeling (SEM) was used to test the associations between the independent variables and the dependent variables. Furthermore, the goodness of fit of the study variables was tested using SEM to examine and explain the paths of influence of the independent variable on the dependent variable. Confirmatory factor analysis (CFA) was conducted to test the validity of the scales used in the study and to assess the measurement model fit indices. The analysis revealed that calling was significantly and positively correlated with surface acting, deep acting and work engagement; surface acting was significantly and positively correlated with deep acting and work engagement; and deep acting was significantly and positively correlated with work engagement. With respect to the mediation analysis, the results revealed that surface acting mediated the relationship between calling and work engagement, and deep acting likewise mediated the relationship between calling and work engagement. Using the model of the present study, school leaders and practitioners can identify core areas to be considered in recruiting teachers, in giving induction training to newly employed teachers, and in performance appraisal.
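
The paper tests mediation within SEM; an equivalent product-of-coefficients check with a percentile bootstrap can be sketched as below. The synthetic variables stand in for the questionnaire scores (X = calling, M = surface or deep acting, Y = work engagement), and all coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(X, M, Y):
    a = sm.OLS(M, sm.add_constant(X)).fit().params[1]                         # X -> M
    b = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit().params[2]   # M -> Y given X
    return a * b

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=n)                       # calling
M = 0.5 * X + rng.normal(size=n)             # acting
Y = 0.4 * M + 0.3 * X + rng.normal(size=n)   # work engagement

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample teachers with replacement
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
print(indirect_effect(X, M, Y), np.percentile(boot, [2.5, 97.5]))
```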

Keywords: calling, surface acting, deep acting, work engagement, mediation, teachers

Procedia PDF Downloads 55
17471 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process; in particular, the widespread application of numerical (model-based) approaches is accompanied by widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data; however, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and discrepancy function are approximated by Gaussian processes (surrogate models). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (the modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, the formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
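
The core of the discrepancy-function idea can be shown in a toy one-dimensional form: a Gaussian process is fitted to the residual between observations and the (surrogate of the) numerical model, so systematic error is separated from observation noise. This scikit-learn sketch fixes the calibration parameter rather than estimating it stage-wise as the modular Bayesian approach does, and every function and number in it is illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fem_model(x, theta):
    # placeholder for the FEM response as a function of, e.g., load position x
    return theta * np.sin(x)

rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 3.0, 15)[:, None]
# "measurements": model truth + a systematic drift + observation noise
y_obs = 1.2 * np.sin(x_obs[:, 0]) + 0.1 * x_obs[:, 0] + 0.01 * rng.standard_normal(15)

theta_hat = 1.2  # stands in for the stage that calibrates the model parameter
residual = y_obs - fem_model(x_obs[:, 0], theta_hat)

# GP over the residual = discrepancy function; WhiteKernel absorbs observation error
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(x_obs, residual)
delta_mean, delta_std = gp.predict(x_obs, return_std=True)
```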

Keywords: bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 305
17470 Multi-Objective Random Drift Particle Swarm Optimization Algorithm Based on RDPSO and Crowding Distance Sorting

Authors: Yiqiong Yuan, Jun Sun, Dongmei Zhou, Jianan Sun

Abstract:

In this paper, we present a Multi-Objective Random Drift Particle Swarm Optimization algorithm (MORDPSO-CD) based on RDPSO and crowding distance sorting, designed to improve convergence and distribution at a lower computational cost. MORDPSO-CD makes the most of RDPSO to approach the true Pareto optimal solutions quickly, and adopts the crowding distance sorting technique to update and maintain the archived optimal solutions. Introducing the crowding distance technique into MORDPSO helps the leader particles ultimately find the true Pareto front. The simulation results reveal that the proposed algorithm has better convergence and distribution.
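
Crowding distance (as defined for NSGA-II, from which the technique is borrowed) scores each archived solution by the normalized spread of its nearest neighbours in each objective, with boundary solutions kept unconditionally. A minimal NumPy sketch of the sorting criterion:

```python
import numpy as np

def crowding_distance(F):
    """F: (n, m) array of objective values for n non-dominated solutions."""
    n, m = F.shape
    dist = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        dist[order[0]] = dist[order[-1]] = np.inf   # always keep boundary solutions
        span = F[order[-1], j] - F[order[0], j]
        if span > 0:
            dist[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return dist

F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [5.0, 1.0]])
print(np.argsort(-crowding_distance(F)))   # least-crowded solutions first
```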

Keywords: multi-objective optimization, random drift particle swarm optimization, crowding distance sorting, pareto optimal solution

Procedia PDF Downloads 228
17469 Node Optimization in Wireless Sensor Network: An Energy Approach

Authors: Y. B. Kirankumar, J. D. Mallapur

Abstract:

Wireless Sensor Networks (WSNs) are an emerging technology with great potential for various low-cost applications, for the mass public as well as for defence. Wireless sensor communication technology allows sensor nodes to join the network at random positions for a particular application, which can leave much of the simulation area uncovered, with few nodes located at far distances. The drawback of such a network is that additional energy is spent by nodes in densely packed regions, where more nodes than necessary serve a small communication distance while regions with fewer nodes are poorly covered, and the source node spends additional energy transmitting a packet through its neighbours to reach the destination. The proposed work develops an Energy Efficient Node Placement Algorithm (EENPA) to place the sensor nodes efficiently in the simulated area, with all nodes equally spaced on radial paths so as to cover the maximum area at equal distances. The total energy consumed by each node is lower than with random placement, as the burden is shared equally among the fewer far-located nodes and the nodes are distributed over the whole simulation area; a placement sketch is given below. Calculating the network lifetime also proves the approach efficient compared with random placement, hence increasing the network lifetime, too. Simulations were carried out in the QualNet simulator, and the results of the EENP algorithm are compared with those of random node placement.
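
One way to read "equally located on a radial path" is nodes at equal spacing along equiangular spokes around the sink. The sketch below compares the one-hop transmission energy of such a layout with a uniform random layout using the common first-order radio model; the spoke/ring counts, radio constants, and one-hop assumption are illustrative, not taken from the paper.

```python
import numpy as np

def radial_placement(n_spokes, n_rings, r_max):
    """Nodes equally spaced along equiangular radial paths from a sink at the origin."""
    thetas = 2 * np.pi * np.arange(n_spokes) / n_spokes
    radii = r_max * np.arange(1, n_rings + 1) / n_rings
    return np.array([[r * np.cos(t), r * np.sin(t)] for t in thetas for r in radii])

def tx_energy(d, bits=1000, e_elec=50e-9, eps_amp=100e-12):
    # first-order radio model: E = bits * (E_elec + eps_amp * d^2)
    return bits * (e_elec + eps_amp * d ** 2)

planned = radial_placement(n_spokes=8, n_rings=5, r_max=100.0)
random_pts = np.random.default_rng(1).uniform(-100, 100, size=planned.shape)

for name, pts in [("EENPA-style", planned), ("random", random_pts)]:
    d = np.linalg.norm(pts, axis=1)           # one-hop distance to the sink
    print(name, tx_energy(d).sum())
```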

Keywords: energy, WSN, wireless sensor network, energy approach

Procedia PDF Downloads 287
17468 Acoustic Induced Vibration Response Analysis of Honeycomb Panel

Authors: Po-Yuan Tung, Jen-Chueh Kuo, Chia-Ray Chen, Chien-Hsing Li, Kuo-Liang Pan

Abstract:

The main-body structure of a satellite is mainly constructed from lightweight materials, yet it must withstand substantial vibration loads during launch. Since conditions in space can change in many ways, studying the random vibration response of the satellite structure is extremely important. Based on the reciprocity relationship between sound and structural response, this paper evaluates the dynamic response of the satellite main body under random acoustic load excitation, studies the technical process, and verifies the feasibility of sonic-borne vibration analysis. A simple plate exposed to a uniform acoustic field is used to obtain some important parameters and to validate the acoustic field model of the reverberation chamber. Both the structure and the acoustic chamber models are then imported into vibro-acoustic coupling analysis software to predict the structural response. During the modeling process, experimental verification is performed to ensure the quality of the numerical models. Finally, the surface vibration level is calculated through the modal participation factors, and the analysis results are presented as PSD spectra.

Keywords: vibration, acoustic, modal, honeycomb panel

Procedia PDF Downloads 537
17467 Rural Development through Women Participation in Livestock Care and Management in District Faisalabad

Authors: Arfan Riasat, M. Iqbal Zafar, Gulfam Riasat

Abstract:

Pakistani women actively participate in livestock management activities along with their normal domestic chores. The study was designed to measure the position and contribution of rural women, the constraints they face in livestock management activities, and, principally, how rural women contribute to development in the district of Faisalabad; women's participation in livestock activities has rarely been investigated. A multistage random sampling technique was used to collect the data from Tehsil Summandry, selected at random from the district. Two union councils were taken using a simple random sampling technique, four chaks (villages) from each union council were selected at random, and fifteen women were further selected randomly from each selected chak. The results show that the vast majority of the women were illiterate, had an annual family income of one to two lac, and lived in a joint family system. Their main occupation is agriculture, and they spend long hours in livestock-related activities to support their families. A large proportion of the respondents reported facing problems and constraints in livestock activities with respect to decision making, medication, awareness and training, along with social and economic issues. Analysis indicated that women's education level, household income and age were significantly associated with the level of participation. Women's participation in livestock activities increased production, and they were involved in income-generating activities that improved the economic conditions of their families.

Keywords: women, participation, livestock, management, rural development

Procedia PDF Downloads 383
17466 Optimizing a Hybrid Inventory System with Random Demand and Lead Time

Authors: Benga Ebouele, Thomas Tengen

Abstract:

Implementing either a periodic or a continuous inventory review model as a management tool within most manufacturing companies' supply chains may incur high costs. These high costs affect the system's flexibility, which in turn affects the level of service required to satisfy customers. However, these effects are not clearly understood, because the parameters of both inventory review policies (protection demand interval, order quantity, etc.) are not designed to be fully utilized under different and uncertain conditions such as poor manufacturing, supply and delivery performance. A hybrid model that combines, in some sense, the features of both the continuous and the periodic inventory review models should therefore be useful. There is thus a need to build such a hybrid model and evaluate it on annual total cost, stock-out probability and system flexibility in order to identify the most cost-effective inventory review model. This work also seeks the optimal sets of inventory management parameters under stochastic conditions so as to optimize each policy independently. The results reveal that a continuous inventory system always incurs a lower cost than a periodic (R, S) inventory system, but this difference tends to decrease over time. Although the hybrid inventory model is the only one that can yield a lower cost over time, it is not always desirable; it is, however, natural to use it to help the system meet high performance specifications.

Keywords: demand and lead time randomness, hybrid Inventory model, optimization, supply chain

Procedia PDF Downloads 290
17465 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on the statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity which, in this paper, is extended to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and the interphase surrounding it by a single equivalent inhomogeneity of constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows one to analyze composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows one to consider inhomogeneities of various shapes and various models of interphases. This is illustrated considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the radius of the nanoparticles (for a fixed volume fraction of nanoparticles) for the different interphase models is compared and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 165
17464 Relevant LMA Features for Human Motion Recognition

Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier

Abstract:

Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets.
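
Feature reduction with a Random Forest typically keeps the features whose impurity-based importances exceed a threshold. A minimal scikit-learn sketch of that step, with synthetic data standing in for the LMA descriptor vectors and a median threshold as an arbitrary choice:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# synthetic stand-in for LMA descriptor vectors (X) and motion labels (y)
X, y = make_classification(n_samples=600, n_features=40, n_informative=10, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
selector = SelectFromModel(rf, threshold="median", prefit=True)
X_reduced = selector.transform(X)   # keeps features with above-median importance
print(X.shape, "->", X_reduced.shape)
```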

Keywords: discriminative LMA features, features reduction, human motion recognition, random forest

Procedia PDF Downloads 163
17463 A Machine Learning Approach for Intelligent Transportation System Management on Urban Roads

Authors: Ashish Dhamaniya, Vineet Jain, Rajesh Chouhan

Abstract:

Traffic management is a gigantic issue on most urban roads in almost all metropolitan cities in India. Speed is one of the critical traffic parameters for effective Intelligent Transportation System (ITS) implementation, as it decides the arrival rate of vehicles at intersections, which are the major points of congestion. The study aimed to leverage Machine Learning (ML) models to produce precise predictions of speed on urban roadway links. The research objective was to assess how categorized traffic volume and road width, serving as variables, influence speed prediction. Four tree-based regression models, namely Decision Tree (DT), Random Forest (RF), Extra Tree (ET), and Extreme Gradient Boost (XGB), are employed for this purpose. The models' performances were validated using test data, and the results demonstrate that Random Forest surpasses the other machine learning techniques and a conventional utility-theory-based model in speed prediction. The study is useful for managing urban roadway network performance under mixed traffic conditions and for effective implementation of ITS.

Keywords: stream speed, urban roads, machine learning, traffic flow

Procedia PDF Downloads 33
17462 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Our increasing ability to solve complex engineering problems is directly related to the processing capacity of computers, which allow numerical algorithms to be run fast and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to give incorrect results when skew data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, nine statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability with the friction angle treated as a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
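
A Monte Carlo version of the same idea is easy to sketch, here under a deliberately simplified rupture criterion (dry, cohesionless infinite slope, FS = tan φ / tan β) and synthetic friction-angle data; the paper itself derives the safety-factor PDF analytically from Mohr-Coulomb. Swapping the fitted SciPy distribution shows how the choice of distribution shifts the failure probability.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
phi_data = rng.normal(32.0, 3.0, 60)           # synthetic friction-angle sample (degrees)

beta = np.radians(28.0)                        # slope angle, treated as deterministic
dist = stats.norm(*stats.norm.fit(phi_data))   # try other scipy distributions here

phi_mc = np.radians(dist.rvs(size=200_000, random_state=1))
fs = np.tan(phi_mc) / np.tan(beta)             # infinite-slope safety factor
print("P(FS < 1) =", (fs < 1.0).mean())
```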

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 310
17461 A Comprehensive Analysis of the Phylogenetic Signal in Ramp Sequences in 211 Vertebrates

Authors: Lauren M. McKinnon, Justin B. Miller, Michael F. Whiting, John S. K. Kauwe, Perry G. Ridge

Abstract:

Background: Ramp sequences increase translational speed and accuracy when rare, slowly-translated codons are found at the beginnings of genes. Here, the results of the first analysis of ramp sequences in a phylogenetic context are presented. Methods: Ramp sequences were compared across 211 vertebrates (110 mammalian and 101 non-mammalian). The presence and absence of ramp sequences were analyzed as a binary character in a parsimony and maximum likelihood framework. Additionally, ramp sequences were mapped to the Open Tree of Life taxonomy to determine the number of parallelisms and reversals that occurred, and these results were compared to what would be expected by random chance. Lastly, aligned nucleotides in ramp sequences were compared to the rest of the sequence in order to examine possible differences in phylogenetic signal between these regions of the gene. Results: Parsimony and maximum likelihood analyses of the presence/absence of ramp sequences recovered phylogenies that are highly congruent with established phylogenies. Additionally, the retention index of ramp sequences is significantly higher than would be expected by random chance (p-value ≈ 0). A chi-square analysis of completely orthologous ramp sequences also resulted in a p-value of approximately zero compared to random chance. Discussion: Ramp sequences recover phylogenies comparable to those of other phylogenomic methods. Although not all ramp sequences appear to carry a phylogenetic signal, more ramp sequences track speciation than expected by random chance. Therefore, ramp sequences may be used in conjunction with other phylogenomic approaches.

Keywords: codon usage bias, phylogenetics, phylogenomics, ramp sequence

Procedia PDF Downloads 130
17460 Distributed Acoustic Sensing Signal Model under Static Fiber Conditions

Authors: G. Punithavathy

Abstract:

This research proposes a statistical model for the distributed acoustic sensing interrogation units that broadcast a laser pulse into an optical fiber, where interactions within the fiber produce light reflections, known as backscatter, from which the localized acoustic energy can be determined. The backscattered signal's amplitude and phase can be calculated using explicit equations. The developed model makes predictions of the amplitude signal spectrum and autocorrelation that are confirmed by experimental findings. Phase signal characteristics useful for studying optical time domain reflectometry (OTDR) sensing applications are provided and examined, showing good agreement with experiment. The computations were implemented in Python. With this model, each component of the distributed acoustic sensing (DAS) system can be analyzed separately. The model assumes that the fiber is in a static condition, meaning that no external force or vibration is applied to the cable, i.e., no external acoustic disturbances are present. The backscattered signal consists of a random noise component, caused by the intrinsic imperfections of the fiber, and a coherent component, due to the laser pulse interacting with the fiber.
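
Under the static-fiber assumption, the backscatter from one pulse-width cell is commonly pictured as a coherent sum of many intrinsic scatterers with random strengths and phases; the sketch below simulates that picture to obtain amplitude statistics. It is a generic Rayleigh-scattering toy model, not the paper's explicit equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scatterers = 1000   # intrinsic imperfections within one pulse-width cell

def cell_field():
    amps = rng.rayleigh(1.0, n_scatterers)              # random scatterer strengths
    phases = rng.uniform(0.0, 2 * np.pi, n_scatterers)  # static fiber: fixed in time
    return np.sum(amps * np.exp(1j * phases))           # coherent summation

fields = np.array([cell_field() for _ in range(5000)])  # many independent cells
amplitude = np.abs(fields)
print("mean amplitude:", amplitude.mean(), "std:", amplitude.std())
# the per-cell phase is np.angle(fields); for a static fiber it does not drift in time
```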

Keywords: distributed acoustic sensing, optical fiber devices, optical time domain reflectometry, Rayleigh scattering

Procedia PDF Downloads 48
17459 Progressive Type-I Interval Censoring with Binomial Removal-Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval-censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on test at time T0 = 0, with k pre-specified inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the Si remaining surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in the different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content, γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs, and the minimum sample size required to achieve the desired β-content, γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
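
The censoring scheme itself is straightforward to simulate, which is also how a simulation study of this design would generate data. The sketch below draws Weibull lifetimes and applies binomial removal at each inspection time; all parameter values are illustrative.

```python
import numpy as np

def progressive_interval_censor(n, times, probs, shape, scale, seed=0):
    """One progressive Type-I interval-censored sample with binomial removal.
    times: T1 < ... < Tk; probs: removal probabilities p1, ..., pk (pk = 1)."""
    rng = np.random.default_rng(seed)
    on_test = scale * rng.weibull(shape, n)      # two-parameter Weibull lifetimes
    prev, record = 0.0, []
    for t, p in zip(times, probs):
        failed = int(np.sum((on_test > prev) & (on_test <= t)))
        on_test = on_test[on_test > t]           # S_i survivors at T_i
        removed = rng.binomial(on_test.size, p)  # R_i ~ Binomial(S_i, p_i)
        keep = rng.permutation(on_test.size) >= removed
        on_test, prev = on_test[keep], t
        record.append((t, failed, int(removed)))
    return record   # (T_i, failures in (T_{i-1}, T_i], removals at T_i)

print(progressive_interval_censor(100, [1, 2, 3, 4], [0.2, 0.2, 0.2, 1.0], 1.5, 3.0))
```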

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 220
17458 A Multi-Criteria Model for Scheduling of Stochastic Single Machine Problem with Outsourcing and Solving It through Application of Chance Constrained

Authors: Homa Ghave, Parmis Shahmaleki

Abstract:

This paper presents a new multi-criteria stochastic mathematical model for single machine scheduling with outsourcing allowed. Multiple jobs are processed in batches, and for each batch all of a job, or a part of it, can be outsourced. The jobs have stochastic processing times and lead times, and jobs with deterministic due dates arrive randomly. Because of the stochastic nature of the processing times and lead times, chance constrained programming is used to model the problem. The problem is first formulated as a stochastic program and then converted into a deterministic mixed integer linear program. The objectives considered in the model are to minimize the maximum tardiness and the outsourcing cost simultaneously. Several procedures have been developed to deal with multi-criteria problems; in this paper, we utilize the concept of satisfaction functions to incorporate the manager's preferences. The proposed approach is tested on instances where the random variables are normally distributed.
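
For normally distributed data, the standard deterministic conversion of a chance constraint replaces the probabilistic requirement with a quantile condition; written here generically for a random completion time (the paper's actual constraints involve its own scheduling variables), with z_α the standard normal α-quantile:

```latex
\Pr\!\left(\tilde{C} \le d\right) \ge \alpha
\quad\Longleftrightarrow\quad
\mu_{\tilde{C}} + z_{\alpha}\,\sigma_{\tilde{C}} \le d,
\qquad \tilde{C} \sim \mathcal{N}\!\left(\mu_{\tilde{C}}, \sigma_{\tilde{C}}^{2}\right)
```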

Keywords: single machine scheduling, multi-criteria mathematical model, outsourcing strategy, uncertain lead times and processing times, chance constrained programming, satisfaction function

Procedia PDF Downloads 239
17457 E-Consumers’ Attribute Non-Attendance Switching Behavior: Effect of Providing Information on Attributes

Authors: Leonard Maaya, Michel Meulders, Martina Vandebroek

Abstract:

Discrete Choice Experiments (DCEs) are used to investigate how product attributes affect decision-makers' choices. In DCEs, choice situations consisting of several alternatives are presented, from which choice-makers select the preferred alternative. Standard multinomial logit models based on random utility theory can be used to estimate the utilities of the attributes. The overarching principle in these models is that respondents understand and use all the attributes when making choices. However, studies suggest that respondents sometimes ignore some attributes (commonly referred to as Attribute Non-Attendance, ANA). The choice modeling literature presents ANA as a static process, i.e., respondents' ANA behavior does not change throughout the experiment. However, respondents may ignore attributes due to changing factors such as the availability of information on attributes, learning or fatigue during the experiment, etc. We develop a dynamic mixture latent Markov model to capture changes in ANA when information on attributes is provided. The model is illustrated on e-consumers' webshop choices. The results indicate that the dynamic ANA model describes the behavioral changes better than modeling the impact of information through changes in parameters. Further, we find that providing information on attributes leads to an increase in the attendance probabilities of the investigated attributes.

Keywords: choice models, discrete choice experiments, dynamic models, e-commerce, statistical modeling

Procedia PDF Downloads 110
17456 A Comparative Study on Sampling Techniques of Polynomial Regression Model Based Stochastic Free Vibration of Composite Plates

Authors: S. Dey, T. Mukhopadhyay, S. Adhikari

Abstract:

This paper presents an exhaustive comparative investigation of sampling techniques for polynomial regression model based stochastic natural frequency analysis of composite plates. Both individual and combined variations of the input parameters are considered in order to map the computational time and accuracy of each modelling technique. The finite element formulation for composites is capable of dealing with both correlated and uncorrelated random input variables such as fibre parameters and material properties. The results obtained by polynomial regression (PR) using different sampling techniques are compared, and the suitability of sampling techniques such as 2^k factorial designs, central composite design, A-optimal design, I-optimal design, D-optimal design, Taguchi's orthogonal array design, Box-Behnken design, Latin hypercube sampling and the Sobol sequence is illustrated. A statistical analysis of the first three natural frequencies is presented to compare the results and the performance of the techniques.
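
Two of the sampling schemes named above, Latin hypercube and the Sobol sequence, are available directly in SciPy's quasi-Monte Carlo module, which makes a comparison of design uniformity straightforward to sketch; the dimension, sample size, and ±10% parameter bounds below are illustrative.

```python
import numpy as np
from scipy.stats import qmc

d, n = 4, 128   # e.g., four random inputs: ply angle, two moduli, mass density

samplers = {
    "LatinHypercube": qmc.LatinHypercube(d=d, seed=0),
    "Sobol": qmc.Sobol(d=d, seed=0),
}
for name, sampler in samplers.items():
    u = sampler.random(n)                                     # points in the unit hypercube
    x = qmc.scale(u, l_bounds=[0.9] * d, u_bounds=[1.1] * d)  # map to +/-10% of nominal
    print(name, "discrepancy:", qmc.discrepancy(u))           # lower = more uniform design
```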

Keywords: composite plate, natural frequency, polynomial regression model, sampling technique, uncertainty quantification

Procedia PDF Downloads 483
17455 Alcohol-Containing versus Aqueous-Based Solutions for Skin Preparation in Abdominal Surgery: A Systematic Review and Meta-Analysis

Authors: Dimitra V. Peristeri, Hussameldin M. Nour, Amiya Ahsan, Sameh Abogabal, Krishna K. Singh, Muhammad Shafique Sajid

Abstract:

Introduction: The use of optimal skin antiseptic agents for the prevention of surgical site infection (SSI) is of critical importance, especially during abdominal surgical procedures. Alcohol-based chlorhexidine gluconate (CHG) and aqueous-based povidone-iodine (PVI) are the two most commonly used skin antiseptics today. The objective of this article is to evaluate the effectiveness of alcohol-based CHG versus aqueous-based PVI used for skin preparation before abdominal surgery in reducing SSIs. Methods: Standard medical databases such as MEDLINE, Embase, PubMed, and the Cochrane Library were searched for randomised controlled trials (RCTs) comparing alcohol-based CHG skin preparation versus aqueous-based PVI in patients undergoing abdominal surgery. The combined outcomes of SSIs were calculated as an odds ratio (OR) with 95% confidence intervals (95% CI). All data were analysed using Review Manager (RevMan) software 5.4, and the meta-analysis was performed with a random-effects model. Results: A total of 11 studies, all RCTs, were included (n = 12072 participants), recruiting adult patients undergoing abdominal surgery. In the random-effects model analysis, the use of alcohol-based CHG in patients undergoing abdominal surgery was associated with a reduced risk of SSI compared to aqueous-based PVI (OR: 0.84; 95% CI [0.74, 0.96], z = 2.61, p = 0.009). Conclusion: Alcohol-based CHG may be more effective in preventing SSI than aqueous-based PVI agents in abdominal surgery. The conclusion of this meta-analysis may add guiding value to reinforce current clinical practice guidelines.
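
The random-effects pooling behind such a result is conventionally the DerSimonian-Laird estimator applied to log odds ratios. A self-contained sketch follows; the three ORs and CIs are made-up inputs, not the trials in this review.

```python
import numpy as np

def dersimonian_laird(or_i, ci_low, ci_high):
    """Pool odds ratios with a DerSimonian-Laird random-effects model.
    Inputs: per-study ORs and their 95% CI bounds."""
    y = np.log(or_i)                                 # log odds ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                  # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum())**2) # Cochran's Q
    c = w.sum() - np.sum(w**2) / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    mu = np.sum(w_re * y) / w_re.sum()
    se_mu = np.sqrt(1.0 / w_re.sum())
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)

print(dersimonian_laird(np.array([0.8, 0.9, 0.7]),
                        np.array([0.6, 0.7, 0.5]),
                        np.array([1.1, 1.2, 1.0])))
```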

Keywords: skin preparation, surgical site infection, chlorhexidine, skin antiseptics

Procedia PDF Downloads 74
17454 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the exertion of protein function. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecular structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models, and in recent years many ENM variants have been proposed. Here, we propose a small but effective modification (denoted the modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) in reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available at http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM still performs much better than the corresponding traditional ENM and pfENM models. As for the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter slightly better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus pfANM and the modified mANM, especially the former, perform excellently in the dynamical cross-correlation calculation. Compared with the GNMs (except for mGNM), the corresponding ANMs capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the consideration of anisotropy in the ANMs. Furthermore, and encouragingly, the modified mANM displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all the cases. This suggests that the consideration of long-range interactions is critical for ANM models to produce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work is helpful for strengthening the understanding of the elastic network model and provides a valuable guide for researchers wishing to utilize the model to explore protein dynamics.
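
For readers unfamiliar with the baseline, a plain (single-scale, cutoff-based) GNM predicts B-factors from the diagonal of the pseudo-inverse of the Kirchhoff matrix; the multiscale variants discussed above replace the binary contact weights with fitted distance-dependent ones. A minimal NumPy sketch of the baseline, where the 7 Å cutoff is the usual convention and the random coordinates stand in for real C-alpha positions:

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Gaussian network model: relative B-factors from the pseudo-inverse
    of the Kirchhoff matrix. coords: (n, 3) C-alpha positions in Angstrom."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)     # -1 for contacting residue pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))   # degree on the diagonal
    gamma_inv = np.linalg.pinv(kirchhoff)       # pseudo-inverse drops the zero mode
    return np.diag(gamma_inv)                   # B_i is proportional to [Gamma^-1]_ii

coords = np.random.default_rng(0).random((50, 3)) * 30.0  # stand-in structure
print(gnm_bfactors(coords)[:5])
```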

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 103
17453 An Adaptive Controller Method Based on Full-State Linear Model of Variable Cycle Engine

Authors: Jia Li, Huacong Li, Xiaobao Han

Abstract:

Because a variable cycle engine (VCE) has more variable geometry parameters than a conventional engine, this paper presents an adaptive controller method based on a full-state linear model of the VCE, simulated to solve the multivariable controller design problem over the whole flight envelope. First, the static and dynamic effects of the variable geometry components on the bypass ratio and other state parameters are analyzed, and a nonlinear component-level model of the VCE is developed. Then, based on the component model, through small-deviation linearization with respect to the main fuel flow (Wf), the tail nozzle throat area (A8) and the rear bypass ejector angle (A163), multiple linear models are set up in which the variable geometry parameters serve as inputs. Second, adaptive controllers are designed for the VCE linear models at different nominal points; considering modeling uncertainties and external disturbances, the adaptive law is derived via a Lyapunov function. The simulation results show that the adaptive controller method based on the full-state linear model, using the rear bypass ejector angle as an input, effectively solves the multivariable control problems of the VCE. The performance at all nominal points could track the desired closed-loop reference instructions: the settling time was less than 1.2 s, the system overshoot was less than 1%, the steady-state errors were less than 0.5%, and the dynamic tracking errors were less than 1%. In addition, the designed controller effectively suppressed interference and reached the desired commands under different external random noise signals.

Keywords: variable cycle engine (VCE), full-state linear model, adaptive control, by-pass ratio

Procedia PDF Downloads 292
17452 Combining a Continuum of Hidden Regimes and a Heteroskedastic Three-Factor Model in Option Pricing

Authors: Rachid Belhachemi, Pierre Rostan, Alexandra Rostan

Abstract:

This paper develops a discrete-time option pricing model for index options. The model consists of two key ingredients. First, daily stock return innovations are driven by a continuous hidden threshold mixed skew-normal (HTSN) distribution, which generates the conditional non-normality needed to fit daily index returns. The most important feature of the HTSN is the inclusion of a latent state variable with a continuum of states, unlike traditional mixture distributions where the state variable is discrete with a small number of states. The HTSN distribution belongs to the class of univariate probability distributions whose parameters capture the dependence between the variable of interest and the continuous latent state variable (the regime). The distribution has an interpretation in terms of a mixture distribution with time-varying mixing probabilities. It has been shown empirically that this distribution outperforms its main competitor, the mixed normal (MN) distribution, in capturing the stylized facts known for stock returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence. Second, heteroscedasticity in the model is captured by a three-exogenous-factor GARCH model (GARCHX), where the factors are taken from a principal component analysis (PCA) of various world indices; an application to option pricing is presented. The factors of the GARCHX model are extracted from a matrix of world indices by applying PCA. The empirically determined factors are uncorrelated and represent truly different common components driving the returns. Both the factors and the eight parameters inherent to the HTSN distribution aim at capturing the impact of the state of the economy on price levels, since the distribution parameters have economic interpretations in terms of conditional volatilities and correlations of the returns with the hidden continuous state. The PCA identifies statistically independent factors affecting the random evolution of a given pool of assets (in our paper, a pool of international stock indices) and sorts them by order of relative importance: it computes a historical cross-asset covariance matrix and identifies principal components representing independent factors. In our paper, the factors are used to calibrate the HTSN-GARCHX model and are ultimately responsible for the nature of the distribution of the random variables being generated. We benchmark our model against the MN-GARCHX model, following the same PCA methodology, and the standard Black-Scholes model. We show that our model outperforms the MN-GARCHX benchmark in terms of RMSE in dollar losses for put and call options, which in turn outperforms the analytical Black-Scholes model, by capturing the stylized facts known for index returns, namely volatility clustering, leverage effect, skewness, kurtosis and regime dependence.

Keywords: continuous hidden threshold, factor models, GARCHX models, option pricing, risk-premium

Procedia PDF Downloads 279
17451 Test-Retest Agreement, Random Measurement Error and Practice Effect of the Continuous Performance Test-Identical Pairs for Patients with Schizophrenia

Authors: Kuan-Wei Chen, Chien-Wei Chen, Tai-Ling Chang, Nan-Cheng Chen, Ching-Lin Hsieh, Gong-Hong Lin

Abstract:

Background and Purposes: Deficits in sustained attention are common in patients with schizophrenia. Such impairment can limit patients' ability to execute daily activities effectively and can affect the efficacy of rehabilitation. The aims of this study were to examine the test-retest agreement, random measurement error, and practice effect of the Continuous Performance Test-Identical Pairs (CPT-IP), a commonly used sustained attention test, in patients with schizophrenia. The results can provide empirical evidence for clinicians and researchers applying a sustained attention test with sound psychometric properties in schizophrenia patients. Methods: We recruited patients with chronic schizophrenia to be assessed twice, with a 1-week interval, using the CPT-IP. The intra-class correlation coefficient (ICC) was used to examine test-retest agreement, the percentage of minimal detectable change (MDC%) to examine random measurement error, and the standardized response mean (SRM) to examine the practice effect. Results: A total of 56 patients participated in this study. Our results showed that the ICC was 0.82, the MDC% was 47.4%, and the SRM was 0.36 for the CPT-IP. Conclusion: Our results indicate that the CPT-IP has acceptable test-retest agreement, substantial random measurement error, and a small practice effect in patients with schizophrenia. Therefore, to avoid overestimating patients' changes in sustained attention, we suggest that clinicians interpret the change scores of the CPT-IP conservatively in their routine repeated assessments.
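
For reference, the three indices reported above are conventionally computed as follows (x₁, x₂: scores from the two sessions; SD₁: baseline standard deviation; x̄: mean score). These are the standard psychometric definitions, assumed here rather than quoted from the paper; in particular, the denominator used for MDC% varies across studies.

```latex
\mathrm{SEM} = SD_{1}\sqrt{1-\mathrm{ICC}}, \qquad
\mathrm{MDC} = 1.96\,\sqrt{2}\,\mathrm{SEM}, \qquad
\mathrm{MDC\%} = 100 \times \mathrm{MDC}/\bar{x}, \qquad
\mathrm{SRM} = \overline{(x_{2}-x_{1})}\,/\,SD(x_{2}-x_{1})
```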

Keywords: schizophrenia, sustained attention, CPT-IP, reliability

Procedia PDF Downloads 272
17450 Deterministic Random Number Generator Algorithm for Cryptosystem Keys

Authors: Adi A. Maaita, Hamza A. A. Al Sewadi

Abstract:

One of the crucial parameters of digital cryptographic systems is the selection of the keys used and their distribution. The randomness of the keys has a strong impact on the system's security strength, as random keys are difficult for a cryptanalyst to predict, guess, reproduce or discover. Therefore, adequate key randomness generation is still sought for the benefit of stronger cryptosystems. This paper suggests an algorithm designed to generate and test pseudorandom number sequences intended for cryptographic applications. The algorithm is based on mathematically manipulating information publicly agreed upon between sender and receiver over a public channel. This information is used as a seed for performing some mathematical functions in order to generate a sequence of pseudorandom numbers that will be used for encryption/decryption purposes. The manipulation involves permutations and substitutions that fulfil Shannon's principle of "confusion and diffusion". ASCII code characters were utilized in the generation process instead of bit strings, which adds more flexibility in testing different seed values. Finally, the obtained results indicate that it would be suitably difficult for attackers to guess the keys.
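
The general pattern of deriving identical key material on both sides from a publicly agreed seed can be sketched as below. This is illustrative only: it is not the paper's algorithm (which uses its own ASCII permutation/substitution functions), and a hash-counter construction like this should not be taken as cryptographically vetted; real designs would use a standardized KDF or DRBG.

```python
import hashlib

def keystream(shared_info: str, n_bytes: int) -> bytes:
    """Toy deterministic generator: both parties derive the same pseudorandom
    bytes from publicly agreed information used as a seed."""
    out, counter = b"", 0
    seed = shared_info.encode("ascii")           # ASCII seed, as in the abstract
    while len(out) < n_bytes:
        block = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        out += block                             # hashing provides confusion/diffusion
        counter += 1
    return out[:n_bytes]

alice = keystream("agreed-public-info", 16)
bob = keystream("agreed-public-info", 16)
assert alice == bob                              # both sides derive the same key material
```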

Keywords: cryptosystems, information security agreement, key distribution, random numbers

Procedia PDF Downloads 238
17449 Consumers’ Willingness to Pay for Organic Vegetables in Oyo State

Authors: Olanrewaju Kafayat, O., Salman Kabir, K.

Abstract:

The role of organic agriculture in providing food and income is now gaining wider recognition (Van Elzakker et al., 2007). Increasing public concern about food safety issues related to the use of fertilizers, pesticide residues, growth hormones and GM organisms, together with increasing awareness of environmental quality issues, has led to an expanding demand for environmentally friendly products (Thompson, 1998; Rimal et al., 2005). As a result, national governments are concerned about diet and health, and there has been renewed recognition of the role of public policy in promoting healthy diets and thus healthier, safer, more confident citizens (Poole et al., 2007). Given these benefits, a study of organic vegetables is vital to all the major stakeholders. This study analyzed the willingness of consumers to pay for organic vegetables in Oyo State, Nigeria. Primary data were collected with the aid of a structured questionnaire administered to 168 respondents, selected using multistage random sampling. The first stage involved the selection of two (2) of the three (3) ADP zones in Oyo State. The second stage involved the random selection of two (2) local government areas from each of the two ADP zones, namely Ibadan South West and Ogbomoso North, and the random selection of four (4) wards from each local government area. The third stage involved the random selection of 42 households from each of the local government areas. Descriptive statistics, principal component analysis, and logistic regression were used to analyze the data. Results showed 55 percent of the respondents were female while 80 percent were 50 years. Seventy-four percent of the respondents agreed that organic vegetables are of better quality. Thirty-one percent of the respondents were aware of organic vegetables, as against 69 percent who were not. From the logistic model, educational attainment, the amount spent on organic vegetables monthly, the better quality of organic vegetables and accessibility to organic vegetables were significant and had a positive relationship with willingness to pay for organic vegetables. The variables that were significant and had a negative relationship with WTP were the lower attractiveness of organic vegetables and the household size of the respondents. This study concludes that consumers with a higher level of education were more likely to be aware of and willing to pay for organic vegetables than those with low levels of education. The study therefore recommends the creation of awareness of the relevance of consuming organic vegetables through effective marketing and educational campaigns.

Keywords: consumers awareness, willingness to pay, organic vegetables, Oyo State

Procedia PDF Downloads 247
17448 Solving Weighted Number of Operation Plus Processing Time Due-Date Assignment, Weighted Scheduling and Process Planning Integration Problem Using Genetic and Simulated Annealing Search Methods

Authors: Halil Ibrahim Demir, Caner Erden, Mumtaz Ipek, Ozer Uygun

Abstract:

Traditionally, the three important manufacturing functions of process planning, scheduling and due-date assignment are performed separately and sequentially. Over the past few decades, hundreds of studies have been done on integrated process planning and scheduling problems, and numerous studies have addressed scheduling with due-date assignment, but unfortunately the integration of these three important functions has not been adequately addressed. Here, the integration of these three functions is studied using genetic, random-genetic hybrid, simulated annealing, random-simulated annealing hybrid and random search techniques. The importance of integrating these three functions and the power of meta-heuristics and hybrid heuristics are also studied.
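
A generic skeleton of one of the search techniques named above, simulated annealing, is sketched here; for the integrated problem a candidate solution would encode a process plan choice, a job sequence and a due-date assignment per job. The neighbourhood move, cooling schedule, and toy cost function are all illustrative, not the paper's.

```python
import math, random

def simulated_annealing(initial, neighbour, cost, t0=100.0, alpha=0.95, iters=5000):
    """Generic SA: accept worse candidates with probability exp(-delta / T)."""
    current, best = initial, initial
    t = t0
    for _ in range(iters):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= alpha          # geometric cooling
    return best

# toy usage: find a low-cost permutation of 10 jobs via pairwise swaps
random.seed(0)
jobs = list(range(10))
def swap(seq):
    i, j = random.sample(range(len(seq)), 2)
    s = seq[:]; s[i], s[j] = s[j], s[i]
    return s
best = simulated_annealing(jobs, swap, cost=lambda s: sum(i * v for i, v in enumerate(s)))
print(best)
```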

Keywords: process planning, weighted scheduling, weighted due-date assignment, genetic search, simulated annealing, hybrid meta-heuristics

Procedia PDF Downloads 453