Search results for: random number
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11810

11330 Introduction to Paired Domination Polynomial of a Graph

Authors: Puttaswamy, Anwar Alwardi, Nayaka S. R.

Abstract:

One of the algebraic representations of a graph is the graph polynomial. In this article, we introduce the paired-domination polynomial of a graph G. The paired-domination polynomial of a graph G of order n is the polynomial Dp(G, x) with coefficients dp(G, i), where dp(G, i) denotes the number of paired dominating sets of G of cardinality i and γpd(G) denotes the paired-domination number of G. We obtain some properties of Dp(G, x) and its coefficients, compute this polynomial for some families of standard graphs, and obtain characterizations of some specific graphs.
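
A paired dominating set is a dominating set whose induced subgraph has a perfect matching, so the coefficients dp(G, i) can be checked by brute force on small graphs. A minimal sketch (graphs are given as adjacency sets; the function names are illustrative, not from the paper):

```python
from itertools import combinations

def dominates(adj, s, n):
    """True if every vertex is in s or adjacent to a vertex of s."""
    covered = set(s)
    for v in s:
        covered |= adj[v]
    return len(covered) == n

def has_perfect_matching(adj, s):
    """Brute-force perfect matching test on the subgraph induced by s."""
    s = list(s)
    if len(s) % 2:
        return False
    if not s:
        return True
    u, rest = s[0], s[1:]
    return any(v in adj[u] and has_perfect_matching(adj, [w for w in rest if w != v])
               for v in rest)

def paired_domination_polynomial(adj):
    """Coefficient list [dp(G, 0), ..., dp(G, n)] of Dp(G, x)."""
    n = len(adj)
    coeffs = [0] * (n + 1)
    for i in range(2, n + 1):
        for s in combinations(range(n), i):
            if dominates(adj, s, n) and has_perfect_matching(adj, s):
                coeffs[i] += 1
    return coeffs

# Path P4 (0-1-2-3): the paired dominating sets are {1, 2} and {0, 1, 2, 3}
p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(paired_domination_polynomial(p4))  # -> [0, 0, 1, 0, 1]
```

So Dp(P4, x) = x^2 + x^4, and the paired-domination number γpd(P4) = 2 is the smallest index with a nonzero coefficient.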

Keywords: domination polynomial, paired dominating set, paired domination number, paired domination polynomial

Procedia PDF Downloads 232
11329 Introduction to Transversal Pendant Domination in Graphs

Authors: Nayaka S.R., Putta Swamy, Purushothama S.

Abstract:

Let G=(V, E) be a graph. A dominating set S in G is a pendant dominating set if < S > contains a pendant vertex. A pendant dominating set of G which intersects every minimum pendant dominating set in G is called a transversal pendant dominating set. The minimum cardinality of a transversal pendant dominating set is called the transversal pendant domination number of G, denoted by γ_tp(G). In this paper, we begin to study this parameter. We calculate γ_tp(G) for some families of graphs. Furthermore, some bounds and relations with other domination parameters are obtained for γ_tp(G).

Keywords: dominating set, pendant dominating set, pendant domination number, transversal pendant dominating set, transversal pendant domination number

Procedia PDF Downloads 182
11328 Numerical Study of Heat Transfer in Silica Aerogel

Authors: Amal Maazoun, Abderrazak Mezghani, Ali Ben Moussa

Abstract:

Aerogel consists of a ramified, interconnected solid skeleton enclosing a very large number of nano-sized pores filled with air, which occupy most of the volume and give the material a very low density. The thermal conductivity of this material can reach lower values than those of any other material, and it changes with the type of aerogel and its composition. So, in order to explain the causes of the super-insulation of this material and to determine the factors on which its conductivity depends, we used numerical simulation. We developed a numerical code that generates a random fractal structure of silica aerogel with a pre-defined concentration, properties of the backbone and of the gas in the pores, as well as the particle size. Calculating the conductivity at every point of the domain shows that it is not constant and that it depends on the pore size and the location within the pore. A numerical method based on resolution by inversion of block tridiagonal matrices is used to calculate the equivalent thermal conductivity of the whole fractal structure. The average conductivity calculated for each concentration is in good agreement with those of typical aerogels. We also found that the equivalent thermal conductivity of a silica aerogel depends strongly not only on the porosity but also on the tortuosity of the solid backbone.
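
The matrix-inversion step can be illustrated in its simplest, scalar form: the Thomas algorithm below solves a tridiagonal system such as the one arising from a 1D conduction discretization. This is a simplified stand-in for the block tridiagonal inversion used in the paper:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS.
    Scalar stand-in for the block tridiagonal inversion used in the paper."""
    n = len(b)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 1D steady conduction discretised as -T[i-1] + 2*T[i] - T[i+1] = 0
n = 5
a = [0.0] + [-1.0] * (n - 1)
b = [2.0] * n
c = [-1.0] * (n - 1) + [0.0]
d = [0.0] * n
d[0] = 1.0   # left boundary temperature folded into the right-hand side
temperatures = thomas_solve(a, b, c, d)
```

The example recovers the expected linear temperature profile between the hot and cold boundaries; in the paper's setting each scalar entry becomes a block coupling neighbouring grid points of the fractal structure.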

Keywords: aerogel, fractal structure, numerical study, porous media, thermal conductivity

Procedia PDF Downloads 291
11327 Analysis and Design of Offshore Triceratops under Ultra-Deep Waters

Authors: Srinivasan Chandrasekaran, R. Nagavinothini

Abstract:

Offshore platforms for ultra-deep waters are form-dominant by design; hybrid systems with large flexibility in the horizontal plane and high rigidity in the vertical plane are preferred due to functional complexities. The offshore triceratops is a relatively new-generation offshore platform, whose deck is partially isolated from the supporting buoyant legs by ball joints. They allow transfer of partial displacements of the buoyant legs to the deck but restrain transfer of rotational response. The buoyant legs are in turn taut-moored to the sea bed using pre-tension tethers. The present study discusses the detailed dynamic analysis and preliminary design of the chosen geometry, which is necessary as a proof of validation for such design applications. A detailed numerical analysis of the triceratops at 2400 m water depth under random waves is presented. The preliminary design confirms member-level design requirements under various modes of failure. The tether configuration proposed in the study shows no pull-out of tethers, as the stress variation remains below the yield value. The presented study should aid offshore engineers and contractors in understanding the suitability of the triceratops in terms of design and dynamic response behaviour.

Keywords: offshore structures, triceratops, random waves, buoyant legs, preliminary design, dynamic analysis

Procedia PDF Downloads 206
11326 Robust Recognition of Locomotion Patterns via Data-Driven Machine Learning in the Cloud Environment

Authors: Shinoy Vengaramkode Bhaskaran, Kaushik Sathupadi, Sandesh Achar

Abstract:

Human locomotion recognition is important in a variety of sectors, such as robotics, security, healthcare, fitness tracking and cloud computing. With the increasing pervasiveness of peripheral devices, particularly Inertial Measurement Unit (IMU) sensors, researchers have attempted to exploit these advancements in order to precisely and efficiently identify and categorize human activities. This research paper introduces a state-of-the-art methodology for the recognition of human locomotion patterns in a cloud environment, based on a publicly available benchmark dataset. The investigation implements a denoising and windowing strategy to deal with the unprocessed data, and feature extraction is then adopted to abstract the main cues from the data. The SelectKBest strategy is used to select the optimal features. Furthermore, state-of-the-art ML classifiers, including logistic regression, random forest, gradient boosting and SVM, are investigated to accomplish precise locomotion classification. Finally, a detailed comparative analysis of results is presented to reveal the performance of the recognition models.
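
The windowing, feature-extraction and feature-scoring steps can be sketched in plain Python (the window length, the three features, and the F-style score are illustrative assumptions; the paper uses SelectKBest and library classifiers):

```python
import statistics

def windows(signal, size, step):
    """Split a 1D signal into overlapping windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def extract_features(window):
    """Abstract the main cues of a window: mean, spread, and range."""
    return [statistics.mean(window),
            statistics.pstdev(window),
            max(window) - min(window)]

def f_score(feature_values, labels):
    """One-way F-style score: between-class vs within-class variance (illustrative)."""
    classes = sorted(set(labels))
    grand = statistics.mean(feature_values)
    groups = {c: [v for v, l in zip(feature_values, labels) if l == c] for c in classes}
    between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups.values())
    within = sum(sum((v - statistics.mean(g)) ** 2 for v in g) for g in groups.values())
    return between / within if within else float("inf")

# Toy IMU-like traces: 'walk' oscillates, 'stand' is flat
walk = [(-1) ** i * 1.0 for i in range(100)]
stand = [0.1] * 100
feats = [extract_features(w) for w in windows(walk, 20, 10)] + \
        [extract_features(w) for w in windows(stand, 20, 10)]
labels = ["walk"] * len(windows(walk, 20, 10)) + ["stand"] * len(windows(stand, 20, 10))
scores = [f_score([f[j] for f in feats], labels) for j in range(3)]
```

Each window is reduced to a small feature vector, and features whose between-class variance dominates their within-class variance score highest, which is the criterion behind SelectKBest's F-score option.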

Keywords: artificial intelligence, cloud computing, IoT, human locomotion, gradient boosting, random forest, neural networks, body-worn sensors

Procedia PDF Downloads 13
11325 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model using machine learning algorithms to aid in forecasting future attacks that is both easy to train and performs well when compared to other models. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set, and we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Second, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
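
Predictive mean matching, the first of the imputation methods named above, can be sketched as follows (the linear predictor, the donor-pool size k, and the toy data are simplifying assumptions, not the paper's exact setup):

```python
import random

def predictive_mean_match(x, y, k=3, seed=0):
    """Impute missing y values (None) by borrowing an observed value from the
    k donors whose predicted means are closest to the missing case's prediction."""
    rng = random.Random(seed)
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    # Fit y ~ a + b*x on the observed cases by least squares
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in obs) / sum((xi - mx) ** 2 for xi, _ in obs)
    a = my - b * mx
    completed = []
    for xi, yi in zip(x, y):
        if yi is not None:
            completed.append(yi)
        else:
            pred = a + b * xi
            donors = sorted(obs, key=lambda p: abs((a + b * p[0]) - pred))[:k]
            completed.append(rng.choice(donors)[1])  # draw an observed value, not the prediction
    return completed

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, None, 8.1, 9.9, None]
filled = predictive_mean_match(x, y)
```

Unlike plain regression imputation, the imputed entries are always actually observed values, drawn from the donors whose predictions are closest to the missing case's prediction; this preserves the variable's natural distribution, which matters when many observations of the dependent variable are missing.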

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 294
11324 A Method for Solving Legendre's Conjecture

Authors: Hashem Sazegar

Abstract:

Legendre’s conjecture states that there is a prime number between n^2 and (n + 1)^2 for every positive integer n. In this paper, we prove that every composite number between n^2 and (n + 1)^2 can be written as u^2 − v^2 or u^2 − v^2 + u − v, where u > 0 and v ≥ 0. Using these results as well as induction and residues (mod q), we prove Legendre’s conjecture.
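
The claimed decomposition can be checked numerically for small n (a brute-force sanity check, not part of the paper's proof):

```python
import math

def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, math.isqrt(m) + 1))

def decomposes(m):
    """Can m be written as u^2 - v^2 or u^2 - v^2 + u - v with u > 0, v >= 0?"""
    for u in range(1, m + 2):
        for v in range(0, u):
            if u * u - v * v == m or u * u - v * v + u - v == m:
                return True
    return False

# Every composite strictly between n^2 and (n+1)^2 should decompose
ok = all(decomposes(m)
         for n in range(2, 12)
         for m in range(n * n + 1, (n + 1) ** 2)
         if not is_prime(m))
```

The two forms factor as (u − v)(u + v) and (u − v)(u + v + 1), covering factorizations into parts of equal and of opposite parity, respectively.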

Keywords: bertrand-chebyshev theorem, landau’s problems, goldbach’s conjecture, twin prime, ramanujan proof

Procedia PDF Downloads 361
11323 Sinusoidal Roughness Elements in a Square Cavity

Authors: Muhammad Yousaf, Shoaib Usman

Abstract:

Numerical studies were conducted using the Lattice Boltzmann Method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on the single-relaxation-time Bhatnagar-Gross-Krook (BGK) model of the Lattice Boltzmann Method (LBM) was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid of Prandtl number (Pr) 1.0. The range of Ra number was explored from 10^3 to 10^6 in the laminar region. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were imposed on the horizontal walls, with the vertical walls isothermal. The roughness elements were at the same boundary condition as the corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found to exist. Results indicate that the maximum reduction in the average heat transfer was 16.66 percent at Ra number 10^5.
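
The core of a single-relaxation-time BGK scheme is the collision step, which relaxes the distribution functions toward a local equilibrium. A minimal D2Q9 sketch of that step alone (streaming, boundary conditions, and the thermal coupling are omitted; the relaxation time tau = 0.6 is an arbitrary illustrative value):

```python
# D2Q9 lattice: discrete velocities and weights for the BGK collision step
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4 / 9] + [1 / 9] * 4 + [1 / 36] * 4

def equilibrium(rho, ux, uy):
    """Second-order discrete Maxwell-Boltzmann equilibrium distribution."""
    usq = ux * ux + uy * uy
    feq = []
    for (cx, cy), w in zip(C, W):
        cu = cx * ux + cy * uy
        feq.append(w * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return feq

def bgk_collide(f, tau):
    """Single-relaxation-time (BGK) collision: relax f toward equilibrium."""
    rho = sum(f)
    ux = sum(fi * cx for fi, (cx, _) in zip(f, C)) / rho
    uy = sum(fi * cy for fi, (_, cy) in zip(f, C)) / rho
    feq = equilibrium(rho, ux, uy)
    return [fi - (fi - fe) / tau for fi, fe in zip(f, feq)]

f0 = equilibrium(1.0, 0.05, 0.0)   # start at equilibrium: collision is a no-op
f1 = bgk_collide(f0, tau=0.6)
```

Starting from an equilibrium state, the collision leaves the distributions unchanged and conserves mass and momentum, which is a standard sanity check for an LBM implementation.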

Keywords: Lattice Boltzmann method, natural convection, nusselt number, rayleigh number, roughness

Procedia PDF Downloads 528
11322 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing collection of documents. The amount of document data has been increasing since the spread of the Internet, and ITA was proposed as one method to analyze such data. ITA extracts independent topics from document data by using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing document collection, because ITA must use all the document data, so its temporal and spatial costs are very high. Therefore, we present Incremental ITA, which extracts independent topics from a growing collection of documents: whenever new documents are added, it updates the independent topics starting from the topics extracted from the previous data, rather than recomputing them from scratch. Finally, we show the results of applying Incremental ITA to benchmark datasets.

Keywords: text mining, topic extraction, independent, incremental, independent component analysis

Procedia PDF Downloads 309
11321 Next-Generation Radiation Risk Assessment and Prediction Tools Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon level precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI-machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable to predict radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights model, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 98
11320 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

When referring to actuarial analysis of lifetimes, only models accounting for observable risk factors have been developed. Within this context, the Cox proportional hazards model (CPH model) is commonly used to assess the effects of observable covariates, such as gender, age, and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime interval. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real data set studies showed differences between the classical Cox model and the shared frailty model.
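
The shared-frailty idea (an unobserved multiplicative random effect on the hazard, shared within a group) can be illustrated by simulation. Gamma frailty with unit mean and an exponential baseline hazard are common parametric choices, assumed here rather than taken from the paper:

```python
import random

def simulate_frailty_lifetimes(n_groups, group_size, base_rate, frailty_var, seed=1):
    """Draw lifetimes whose hazard is Z * base_rate, where the frailty
    Z ~ Gamma(1/frailty_var, frailty_var) has mean 1 and variance frailty_var
    and is shared by every member of a group (e.g., a couple or a cohort)."""
    rng = random.Random(seed)
    shape = 1.0 / frailty_var
    groups = []
    for _ in range(n_groups):
        z = rng.gammavariate(shape, frailty_var)  # shared frailty, E[Z] = 1
        groups.append([rng.expovariate(z * base_rate) for _ in range(group_size)])
    return groups

groups = simulate_frailty_lifetimes(2000, 2, base_rate=0.1, frailty_var=0.25)
lifetimes = [t for g in groups for t in g]
mean_lifetime = sum(lifetimes) / len(lifetimes)
```

Fitting a classical Cox or exponential model to such data while ignoring Z misses the dependence that the shared frailty induces between group members, which is exactly the kind of difference the comparison in the paper quantifies.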

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study

Procedia PDF Downloads 359
11319 Prevalence and Influencing Factors of Type 2 Diabetes among Obese Patients (Diabesity) Attending Selected Healthcare Facilities in Calabar, Nigeria

Authors: Anietie J. Atangwho, Udeme E. Asibong, Item J. Atangwho, Ndifreke E. Udonwa

Abstract:

Diabesity, a syndrome where diabetes and obesity occur simultaneously in a single patient, has emerged as a recent challenge to the medical world and is already at epidemic proportion in some countries. Therefore, this study aimed to determine the prevalence of diabesity among adult patients attending the General Outpatient clinic of three healthcare facilities in Calabar in a bid to improve healthcare delivery to patients at risk. A cross-sectional descriptive study design was employed using a mixed method approach that comprised quantitative and qualitative components, i.e., Focused Group Discussion (FGD) and Key Informant Interview (KII). One hundred and ninety (190) participants aged 18 to 72 years with body mass index (BMI) ≥ 30kg/m2 were recruited as the study population for the quantitative study using a systematic random sampling technique, and the data were analysed using SPSS version 25. The qualitative component comprised 4 FGDs and 3 KIIs. Results of sociodemographic variables showed respondents aged 35 – 44 as highest in number (37.3%). Of this number, 83.7% were females, 76.8% married, and 3.7% earned USD1,110.00 monthly. Whereas the majority of the participants (65.8%) were within class 1 obesity, only 38% considered themselves obese. Diabesity occurrence was found to be 12.6% (i.e. BMI ≥ 30 to 45.2kg/m2 vs FBS ≥ 7.0 – 14.8mmol/l), with 38% of the cases previously undiagnosed. About 48.4% of the respondents ate only two meals per day, with 90.5% eating between meals. Snacking was predominant, mostly pastries (67.9%), with 58.9% taking cola drinks alongside. Sixty-one percent participated in one form of exercise or the other, with walking/trekking as the most common; 34.4% had no regular exercise schedule. Only about 39.5% of the participants spent less than an hour on devices like phones, televisions, and laptops. Additionally, previously known and newly diagnosed hypertensive patients were 27.9% and 7.2%, respectively.
Qualitative assessment with the KIIs and FGDs showed eating unhealthy diets and lack of exercise as the major factors responsible for diabesity. The bivariate analysis revealed a significant association of diabesity with marital status and hypertension (p = 0.007 and p = 0.005, respectively). Also positively associated with diabesity were snacking (p = 0.017) and the number of times a respondent snacks per day (p = 0.035). Overall, the study has revealed the occurrence of diabesity in Calabar at 12.6% of the study population, with 38% of the cases previously undiagnosed; it identified unhealthy diets and lack of exercise as causative factors, as well as hypertension and snacking as associated indicators of diabesity.

Keywords: diabesity, obesity, diabetes, unhealthy diet

Procedia PDF Downloads 81
11318 The Influence of Emotion on Numerical Estimation: A Drone Operators’ Context

Authors: Ludovic Fabre, Paola Melani, Patrick Lemaire

Abstract:

The goal of this study was to test whether and how emotions influence drone operators' estimation skills. The empirical study was run in the context of numerical estimation. Participants saw a two-digit number together with a collection of cars. They had to indicate whether the collection was larger or smaller than the number. The two-digit numbers ranged from 12 to 27, and collections included 3-36 cars. The presentation of the collections was dynamic (each car moved 30 deg. per second to the right). Half the collections were smaller collections (including fewer than 20 cars), and the other collections were larger collections (i.e., more than 20 cars). Splits between the number of cars in a collection and the two-digit number were either small (± 1 or 2 units; e.g., the collection included 17 cars and the two-digit number was 19) or larger (± 8 or 9 units; e.g., 17 cars and '9'). Half the collections included more items (and half fewer items) than the number indicated by the two-digit number. Before and after each trial, participants saw an image inducing negative emotions (e.g., mutilations) or neutral emotions (e.g., a candle), selected from the International Affective Picture System (IAPS). At the end of each trial, participants had to say whether the second picture was the same as or different from the first. Results showed different effects of emotions on RTs and percent errors. Participants’ performance was modulated by emotions: they were slower on negative trials than on neutral trials, especially on the most difficult items, and they made more errors on small-split than on large-split problems. Moreover, participants greatly overestimated the number of cars when in a negative emotional state. These findings suggest that emotions influence numerical estimation and that the effects of emotion on estimation interact with stimuli characteristics. They have important implications for understanding the role of emotions in estimation skills and, more generally, how emotions influence cognition.

Keywords: drone operators, emotion, numerical estimation, arithmetic

Procedia PDF Downloads 117
11317 Young’s Modulus Variability: Influence on Masonry Vault Behavior

Authors: Abdelmounaim Zanaz, Sylvie Yotte, Fazia Fouchal, Alaa Chateauneuf

Abstract:

This paper presents a methodology for the probabilistic assessment of the bearing capacity and the prediction of the failure mechanism of masonry vaults at the ultimate state, with consideration of the natural variability of the Young’s modulus of the stones. First, the computation model is explained. The failure mode considered is the most reported one, i.e. the four-hinge mechanism. Based on this assumption, the study of a vault composed of 16 segments is presented. The Young’s modulus of the segments is considered as a random variable defined by a mean value and a coefficient of variation CV. A relationship linking the vault bearing capacity to the modulus variation of the voussoirs is proposed. The failure mechanisms, in addition to the one observed in the deterministic case, are identified for each CV value, as well as their probability of occurrence. The results show that the probability of occurrence of the mechanism observed in the deterministic case decreases with increasing CV, while the number of other mechanisms and their probability of occurrence increase with the coefficient of variation of the Young’s modulus. This means that if a significant variation in the Young’s modulus of the segments is proven, taking it into account in computations becomes mandatory, both for determining the vault bearing capacity and for predicting its failure mechanism.
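
The probabilistic loop itself is generic: draw a random Young's modulus for each voussoir, evaluate the failure mechanism for that realisation, and accumulate occurrence probabilities. In the sketch below the mechanical model is a deliberately crude stand-in (the weakest-segment rule and all numbers are assumptions, not the paper's model):

```python
import random

def sample_moduli(n_segments, mean_e, cv, rng):
    """One realisation of segment Young's moduli with coefficient of variation cv."""
    return [rng.gauss(mean_e, cv * mean_e) for _ in range(n_segments)]

def failure_mechanism(moduli):
    """Stand-in rule: label the mechanism by the quarter of the vault that
    contains the weakest segment (a real model would locate the four hinges)."""
    weakest = min(range(len(moduli)), key=lambda i: moduli[i])
    return weakest * 4 // len(moduli)

def mechanism_probabilities(n_segments, mean_e, cv, trials=10000, seed=42):
    """Monte Carlo estimate of the probability of occurrence of each mechanism."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        m = failure_mechanism(sample_moduli(n_segments, mean_e, cv, rng))
        counts[m] = counts.get(m, 0) + 1
    return {m: c / trials for m, c in counts.items()}

probs = mechanism_probabilities(n_segments=16, mean_e=20e9, cv=0.15)
```

With a symmetric stand-in model the mechanism labels come out roughly equiprobable; plugging in the paper's capacity model would instead concentrate the probability on the deterministic mechanism at low CV and spread it as CV grows.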

Keywords: masonry, mechanism, probability, variability, vault

Procedia PDF Downloads 443
11316 Effect of an Axial Magnetic Field on Co-rotating Flow Heated from Below

Authors: B. Mahfoud, A. Bendjagloli

Abstract:

The effect of an axial magnetic field on the flow produced by the co-rotation of the top and bottom disks in a vertical cylinder heated from below is numerically analyzed. The governing Navier-Stokes, energy, and potential equations are solved by using the finite-volume method. It was observed that, as the Reynolds number is increased, the axisymmetric basic state loses stability to circular patterns of axisymmetric vortices and spiral waves. In the mixed convection case, the axisymmetric mode disappears, giving an asymmetric mode m = 1. It was also found that the primary thresholds Recr corresponding to the modes m = 1 and 2 increase with increasing Hartmann number (Ha). Finally, stability diagrams have been established according to the numerical results of this investigation. These diagrams give the evolution of the primary thresholds as a function of the Hartmann number for various values of the Richardson number.

Keywords: bifurcation, co-rotating end disks, magnetic field, stability diagrams, vortices

Procedia PDF Downloads 350
11315 A Comparative Study of Multi-SOM Algorithms for Determining the Optimal Number of Clusters

Authors: Imèn Khanchouch, Malika Charrad, Mohamed Limam

Abstract:

The interpretation of the quality of clusters and the determination of the optimal number of clusters are still crucial problems in clustering. We focus in this paper on the multi-SOM clustering method, which overcomes the problem of extracting the number of clusters from the SOM map through the use of a clustering validity index. We then tested multi-SOM using real and artificial data sets with different evaluation criteria not used previously, such as the Davies-Bouldin index, the Dunn index and the silhouette index. The developed multi-SOM algorithm is compared to the k-means and BIRCH methods. Results show that it is more efficient than classical clustering methods.
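
A validity index of the kind used here is straightforward to compute; below is a minimal silhouette-index sketch for 1D data (the paper applies such indices to clusters of SOM map units, whereas this toy version assumes scalar points):

```python
def silhouette_index(points, labels):
    """Mean silhouette over all points; points are 1D values here for brevity."""
    def mean_dist(p, members):
        return sum(abs(p - q) for q in members) / len(members)
    clusters = {c: [p for p, l in zip(points, labels) if l == c] for c in set(labels)}
    total = 0.0
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q != p] or [p]
        a = mean_dist(p, own)                                   # cohesion
        b = min(mean_dist(p, clusters[c]) for c in clusters if c != l)  # separation
        total += (b - a) / max(a, b)
    return total / len(points)

# Two well-separated 1D clusters score close to 1; scrambled labels score poorly
good = silhouette_index([1.0, 1.1, 1.2, 9.0, 9.1, 9.2], [0, 0, 0, 1, 1, 1])
bad = silhouette_index([1.0, 9.0, 1.1, 9.1, 1.2, 9.2], [0, 0, 1, 1, 0, 1])
```

Well-separated clusters score near 1 and misassigned labels drag the mean silhouette toward 0 or below; scanning such a score over candidate cluster counts is how a validity index selects the optimal number of clusters.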

Keywords: clustering, SOM, multi-SOM, DB index, Dunn index, silhouette index

Procedia PDF Downloads 599
11314 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version: solutions are initialized by an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk improves local search, a Greedy Selection accelerates convergence to the optimized solution, and a “Study Nearby Strategy” improves global search performance by avoiding trapping in local optima. It is further proposed to generate better solutions by a Crossover Operation. The strategy used in the proposed algorithm shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate its superiority over other established algorithms. The algorithm is also capable of handling larger unit systems.
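
For reference, the baseline cuckoo search that such upgrades start from pairs Lévy-flight moves with random abandonment of poor nests. A compact sketch (the step scale 0.01, the abandonment fraction pa = 0.25, and the sphere objective are illustrative choices, not the paper's settings):

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Levy-distributed step."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / max(abs(v), 1e-12) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=300, pa=0.25, seed=0):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=f)
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight around the current nest, scaled by its distance to the best
            cand = [x + 0.01 * levy_step(rng) * (x - b) for x, b in zip(nests[i], best)]
            j = rng.randrange(n_nests)       # compare with a randomly chosen nest
            if f(cand) < f(nests[j]):
                nests[j] = cand
        nests.sort(key=f)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]  # abandon worst nests
        best = min(nests + [best], key=f)    # keep an elitist copy of the best
    return best

sphere = lambda x: sum(v * v for v in x)
sol = cuckoo_search(sphere, dim=2)
```

The heavy-tailed Lévy steps give occasional long jumps (global exploration) among many short ones (local refinement); the upgrades described above replace the selection, walk, and initialization components of exactly this loop.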

Keywords: economic dispatch, gaussian selection operator, prohibited operating zones, ramp rate limits

Procedia PDF Downloads 130
11313 Taguchi Method for Analyzing a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

Logistics network design is known as one of the strategic decision problems. As these kinds of problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we attempt to involve reverse flows through an integrated design of a forward/reverse supply chain network formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a revised random path direct encoding based memetic algorithm is considered as the solution methodology. Each algorithm has some parameters that need to be investigated to reveal the best performance. In this regard, the Taguchi method is adapted to identify the optimum operating condition of the proposed memetic algorithm and to improve the results. In this study, four factors, namely population size, crossover rate, local search iterations, and number of iterations, are considered. Analyzing the parameters and the improvement in results are the outlook of this research.
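
With four factors at three levels each, the Taguchi screening needs only the nine runs of an L9(3^4) orthogonal array instead of all 81 combinations. A sketch of that bookkeeping (the level values and the placeholder response function are illustrative; in practice the response is the memetic algorithm's solution quality):

```python
# L9 orthogonal array: 9 runs, 4 factors at 3 levels (0, 1, 2)
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

factors = {
    "population_size": [50, 100, 200],
    "crossover_rate": [0.6, 0.8, 0.9],
    "local_search_iters": [5, 10, 20],
    "iterations": [100, 200, 400],
}

def run_experiment(setting):
    """Placeholder response (smaller is better); in practice, run the
    memetic algorithm with this setting and return its best cost."""
    return sum(v * 0.001 for v in setting.values())

names = list(factors)
results = []
for row in L9:
    setting = {n: factors[n][lvl] for n, lvl in zip(names, row)}
    results.append(run_experiment(setting))

# Per factor, average the response over the runs at each level and keep the best
best_levels = {}
for j, n in enumerate(names):
    means = [sum(r for row, r in zip(L9, results) if row[j] == lvl) / 3
             for lvl in range(3)]
    best_levels[n] = factors[n][min(range(3), key=means.__getitem__)]
```

Because every level of every factor appears in exactly three runs, the per-level mean responses are directly comparable, and the best level of each factor can be read off independently.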

Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method

Procedia PDF Downloads 191
11312 The Role of Sexual Satisfaction in the Marital Satisfaction of Married Men

Authors: Maghsoud Nader Pilehroud, Mohmmad Alizadeh, Soheila Golipour, Sedigeh Tajabadipour

Abstract:

Aim: In terms of importance, sexual issues are among the highest-priority issues in married life, and sexual compatibility is one of the most important reasons for success in married life and, consequently, marital satisfaction. The present research was conducted with the aim of examining the role of sexual satisfaction in the marital satisfaction of married men. Study design: This research is descriptive and of the correlational type. Method: The statistical population includes all the married men of Ardebil city, out of which 60 men were chosen by random sampling as the research sample. The research instruments were the ENRICH couple scale and the Hudson sexual satisfaction scale. The findings were analyzed using descriptive statistics (mean and standard deviation), inferential statistics (Pearson's correlation and regression) and SPSS-16 software. Results: The results showed that sexual satisfaction has a positive and significant relationship with marital satisfaction and all of its components, and that sexual satisfaction can predict marital satisfaction. The results also showed that sexual and marital satisfaction are not significantly related to any of the variables of education level, duration of marriage and number of children. Conclusion: According to the results, it can be claimed that sexual skills training for couples can be influential in increasing their marital satisfaction, and that sexual satisfaction plays an important role in marital satisfaction.

Keywords: sexual satisfaction, marital satisfaction, married men, Iran

Procedia PDF Downloads 157
11311 The Relationship between Social Capital and Knowledge Sharing in the Ministry of Culture and Islamic Guidance (Iran)

Authors: Narges Sadat Myrmousavy, Maryam Eslampanah

Abstract:

The aim of this study was to investigate the relationship between social capital and knowledge sharing in the Ministry of Culture and Islamic Guidance. It is a descriptive correlational study. The study population consisted of all the experts at the Ministry of Culture and Islamic Guidance headquarters in Tehran in the summer of 2012, numbering 650. Targeted random sampling was used, with a sample size of 400. The data collection tool was a questionnaire prepared on the basis of a standard questionnaire. Regression coefficients were also examined to test the assumed relationships between variables and the main hypotheses. The findings suggest that there is a direct relationship between the structural dimension and knowledge sharing, and a direct relationship between impression management and knowledge sharing. There was no significant relationship between individual pro-social motives and knowledge sharing. Both components of the cognitive dimension, open-mindedness and competence, are directly related to knowledge sharing. Finally, a comparison between the different dimensions of social capital shows that the structural dimension has the strongest relationship with knowledge sharing.

Keywords: social capital, knowledge sharing, ministry of culture and Islamic guidance (Iran), open mindedness, pro-social motives

Procedia PDF Downloads 503
11310 Chaos Fuzzy Genetic Algorithm

Authors: Mohammad Jalali Varnamkhasti

Abstract:

Genetic algorithms have been very successful in handling difficult optimization problems, but their fundamental weakness is premature convergence. This paper presents a new fuzzy genetic algorithm that uses chaotic values in place of the random values in the genetic algorithm's processes. In this algorithm, chaotic sequences generate the initial population, and a new sexual selection scheme is proposed for the selection mechanism. The population is divided so that males and females are selected in an alternating way, and the layout of the male and female chromosomes differs in each generation. A female chromosome is selected by tournament selection from the female group. The male chromosome is then selected, in order of preference, based on the maximum Hamming distance between the male and the female chromosome; the highest fitness value among the male chromosomes (if more than one male chromosome attains the maximum Hamming distance); or random selection. The crossover and mutation operators are driven by fuzzy logic controllers: the crossover and mutation probabilities are varied on the basis of the phenotype and genotype characteristics of the chromosome population. Computational experiments are conducted on the proposed technique, and the results are compared with other operators, heuristics, and local search algorithms commonly used for solving p-median problems published in the literature.
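As a rough illustration of the two ideas above, the chaotic initialization and the Hamming-distance-based mate selection can be sketched in Python as follows. The function names, the logistic-map seed, and the thresholding at 0.5 are our own assumptions for illustration, not the author's implementation:

```python
import random

def logistic_map(x0, n, r=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def chaotic_population(pop_size, chrom_len, x0=0.7):
    """Binary initial population obtained by thresholding the chaotic sequence at 0.5."""
    seq = logistic_map(x0, pop_size * chrom_len)
    return [[1 if seq[i * chrom_len + j] > 0.5 else 0 for j in range(chrom_len)]
            for i in range(pop_size)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def select_male(female, males, fitness):
    """Order of preference from the abstract: maximum Hamming distance to the
    female, then highest fitness among ties, then random choice."""
    dmax = max(hamming(female, m) for m in males)
    candidates = [m for m in males if hamming(female, m) == dmax]
    fmax = max(fitness(m) for m in candidates)
    best = [m for m in candidates if fitness(m) == fmax]
    return random.choice(best)
```

A full algorithm would layer the fuzzy-controlled crossover and mutation probabilities on top of this selection step.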

Keywords: genetic algorithm, fuzzy system, chaos, sexual selection

Procedia PDF Downloads 386
11309 On the Network Packet Loss Tolerance of SVM Based Activity Recognition

Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir

Abstract:

In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance are examined when data are received over a lossy wireless sensor network. Initially, the classification algorithm is evaluated for resilience to random data loss using 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality-of-service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
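The robustness experiment described above can be imitated with a small, self-contained sketch. Since an SVM is not available in the Python standard library, a nearest-centroid classifier stands in for it here; the feature means, the loss rate, and the mean-imputation strategy are illustrative assumptions, not details from the paper:

```python
import random

def make_samples(rng, n_per_class):
    """Synthetic 3D-acceleration-like features for two activities (means are made up)."""
    classes = {"sitting": (0.1, 0.1, 9.8), "walking": (1.5, 0.8, 9.0)}
    data = []
    for label in sorted(classes):
        for _ in range(n_per_class):
            data.append((label, [m + rng.gauss(0, 0.3) for m in classes[label]]))
    return data

def drop_features(rng, sample, loss_rate):
    """Simulate random packet loss: each reading is lost independently with prob. loss_rate."""
    return [None if rng.random() < loss_rate else v for v in sample]

def impute(sample, fallback):
    """Replace lost readings with per-axis fallback values (here: training means)."""
    return [fallback[i] if v is None else v for i, v in enumerate(sample)]

def nearest_centroid(train):
    """Per-class mean vectors; a simple stand-in for the SVM used in the paper."""
    cents = {}
    for label in sorted({l for l, _ in train}):
        rows = [x for l, x in train if l == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def classify(cents, x):
    return min(cents, key=lambda l: sum((a - b) ** 2 for a, b in zip(cents[l], x)))

rng = random.Random(42)
train = make_samples(rng, 200)
test = make_samples(rng, 100)
cents = nearest_centroid(train)
axis_means = [sum(x[i] for _, x in train) / len(train) for i in range(3)]

correct = 0
for label, x in test:
    lossy = impute(drop_features(rng, x, loss_rate=0.3), axis_means)
    correct += classify(cents, lossy) == label
accuracy = correct / len(test)
```

Even with 30% of readings dropped at random, classification accuracy stays high for well-separated activities, which mirrors the qualitative finding reported in the abstract.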

Keywords: activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss

Procedia PDF Downloads 477
11308 Comparison between Separable and Irreducible Goppa Code in McEliece Cryptosystem

Authors: Newroz Nooralddin Abdulrazaq, Thuraya Mahmood Qaradaghi

Abstract:

The McEliece cryptosystem is an asymmetric cryptosystem based on error-correcting codes. The classical McEliece scheme used an irreducible binary Goppa code, which is considered unbreakable to this day, especially with the parameters [1024, 524, 101]; however, it suffers from a large public key matrix, which makes it difficult to use in practice. In this work, irreducible and separable Goppa codes with flexible parameters and dynamic error vectors are introduced, and a comparison between separable and irreducible Goppa codes in the McEliece cryptosystem is carried out. For the encryption stage, two types of test were chosen to obtain a fair comparison: in the first, the random message is held constant while the parameters of the Goppa code are varied; in the second, the parameters of the Goppa code are held constant (m = 8 and t = 10) while the random message is varied. The results show that the time needed to calculate the parity check matrix is higher for the separable than for the irreducible McEliece cryptosystem, which is expected because an extra parity check matrix for g2(z) must be calculated in the decryption process of the separable type, whereas the time needed to execute the error locator in the decryption stage is better for the separable type than for the irreducible type. The proposed implementation was done in Visual Studio with C#.
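The overall encrypt/decrypt flow of a McEliece-style scheme can be illustrated with a deliberately insecure toy: a 3-fold repetition code substitutes for the Goppa code, and the scrambler matrix S is omitted for brevity. All names and parameters here are illustrative, not the authors' implementation:

```python
import random

K, REP = 4, 3            # message bits; repetition factor (toy stand-in for a Goppa code)
N = K * REP              # codeword length

def encode(msg):
    """'Generator matrix' of the repetition code: repeat each message bit REP times."""
    return [b for b in msg for _ in range(REP)]

def decode(word):
    """Majority-vote decoding: corrects up to one flipped bit per REP-bit block."""
    return [1 if sum(word[i * REP:(i + 1) * REP]) > REP // 2 else 0 for i in range(K)]

def keygen(rng):
    """Private key: a secret column permutation P (scrambler matrix S omitted)."""
    perm = list(range(N))
    rng.shuffle(perm)
    return perm

def encrypt(rng, msg, perm):
    """Ciphertext = permuted codeword + random error vector of weight 1."""
    code = encode(msg)
    cipher = [code[perm[i]] for i in range(N)]
    cipher[rng.randrange(N)] ^= 1
    return cipher

def decrypt(cipher, perm):
    """Undo the permutation, then let the code's decoder remove the error."""
    word = [0] * N
    for i in range(N):
        word[perm[i]] = cipher[i]
    return decode(word)

rng = random.Random(7)
perm = keygen(rng)
assert decrypt(encrypt(rng, [1, 0, 1, 1], perm), perm) == [1, 0, 1, 1]
```

The real system's security rests on the Goppa code's structure being hidden by S and P; a repetition code offers none of that, so this sketch only shows the data flow c = mG' + e and its inversion.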

Keywords: McEliece cryptosystem, Goppa code, separable, irreducible

Procedia PDF Downloads 267
11307 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which forecast ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, namely Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest outperforming the other algorithms; the conventional method remains the better tool when only limited data are available. Second, it is investigated how machine learning techniques could benefit the development of probabilistic seismic demand models (PSDMs), which relate the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) to the ground motion IMs. In the risk framework, such models are used to develop fragility curves that estimate the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
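For reference, the conventional linear regression-based baseline mentioned above can be sketched in pure Python: a least-squares fit of a minimal ground-motion model ln(PGA) = a + b·M + c·ln(R) via the normal equations. The functional form and the "true" coefficients used to generate the synthetic catalogue are illustrative assumptions, not the paper's actual model:

```python
import math, random

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_gmm(records):
    """Least-squares fit of ln(PGA) = a + b*M + c*ln(R) via the normal equations."""
    X = [[1.0, m, math.log(r)] for m, r, _ in records]
    y = [math.log(pga) for _, _, pga in records]
    XtX = [[sum(xi[i] * xi[j] for xi in X) for j in range(3)] for i in range(3)]
    Xty = [sum(xi[i] * yi for xi, yi in zip(X, y)) for i in range(3)]
    return solve3(XtX, Xty)

# Synthetic catalogue from assumed "true" coefficients a = -2.0, b = 1.2, c = -1.5.
rng = random.Random(0)
records = []
for _ in range(500):
    m = rng.uniform(4.0, 7.5)                  # magnitude
    r = rng.uniform(5.0, 200.0)                # source-to-site distance, km
    ln_pga = -2.0 + 1.2 * m - 1.5 * math.log(r) + rng.gauss(0, 0.1)
    records.append((m, r, math.exp(ln_pga)))
a, b, c = fit_gmm(records)
```

The pre-defined functional form is exactly the restriction the abstract points at: a Random Forest fitted to the same records needs no such equation, at the cost of requiring more data.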

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 106
11306 The Role of Disturbed Dry Afromontane Forest of Ethiopia for Biodiversity Conservation and Carbon Storage

Authors: Mindaye Teshome, Nesibu Yahya, Carlos Moreira Miquelino Eleto Torres, Pedro Manuel Villaa, Mehari Alebachew

Abstract:

Arbagugu forest is one of the remnant dry Afromontane forests under severe anthropogenic disturbance in central Ethiopia. Despite this, up-to-date information is lacking on the status of the forest and its role in climate change mitigation. In this study, we evaluated the woody species composition, structure, biomass, and carbon stock of this forest. We employed a systematic random sampling design and established fifty-three sample plots (20 × 100 m) to collect the vegetation data. A total of 37 woody species belonging to 25 families were recorded. The densities of seedlings, saplings, and mature trees were 1174, 101, and 84 stems ha-1, respectively. The total basal area of trees with DBH (diameter at breast height) ≥ 2 cm was 21.3 m2 ha-1. The characteristic trees of dry Afromontane forest, such as Podocarpus falcatus, Juniperus procera, and Olea europaea subsp. cuspidata, exhibited a fair regeneration status. By contrast, the least abundant species, Lepidotrichilia volkensii, Canthium oligocarpum, Dovyalis verrucosa, Calpurnia aurea, and Maesa lanceolata, exhibited good regeneration status, while some tree species, such as Polyscias fulva, Schefflera abyssinica, Erythrina brucei, and Apodytes dimidiata, lack regeneration. The total carbon stored in the forest ranged between 6.3 Mg C ha-1 and 835.6 Mg C ha-1, which is equivalent to 639.6 Mg C ha-1. The forest had a very low woody species composition and diversity, and the regeneration study revealed that a significant number of tree species had an unsatisfactory regeneration status. In addition, the forest had a lower carbon stock density than other dry Afromontane forests. This implies an urgent need for forest conservation and restoration activities by the local government, conservation practitioners, and other concerned bodies to maintain the forest and sustain the various ecosystem goods and services provided by the Arbagugu forest.

Keywords: aboveground biomass, forest regeneration, climate change, biodiversity conservation, restoration

Procedia PDF Downloads 110
11305 Optimal Design of Step-Stress Partially Accelerated Life Test Using Multiply Censored Exponential Data with Random Removals

Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam

Abstract:

The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit to the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e., ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT) is more suitable, in which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The lifetimes of the units under test are assumed to follow an exponential distribution, and the removals from the test are assumed to follow a binomial distribution. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.
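A minimal sketch of the tampered-random-variable formulation behind step-stress PALT might look like this in Python. It omits censoring and random removals, and it assumes the tampering coefficient is known; the model and estimator follow the standard exponential step-stress setup, not the paper's exact scheme:

```python
import random

def simulate_trv(rng, n, lam, beta, tau):
    """Tampered-random-variable model: Y = T if T <= tau, else tau + (T - tau)/beta,
    where T ~ Exp(lam) is the lifetime under normal stress, tau is the stress-change
    time, and beta > 1 is the tampering (acceleration) coefficient."""
    ys = []
    for _ in range(n):
        t = rng.expovariate(lam)
        ys.append(t if t <= tau else tau + (t - tau) / beta)
    return ys

def mle_lambda(ys, beta, tau):
    """MLE of lam with beta known: lam_hat = n / (S1 + beta * S2), where
    S1 = total time observed at normal stress and S2 = total time at
    accelerated stress (from maximizing the step-stress log-likelihood)."""
    s1 = sum(min(y, tau) for y in ys)
    s2 = sum(max(y - tau, 0.0) for y in ys)
    return len(ys) / (s1 + beta * s2)

rng = random.Random(1)
ys = simulate_trv(rng, 20000, lam=0.5, beta=2.0, tau=1.0)
lam_hat = mle_lambda(ys, beta=2.0, tau=1.0)
```

With beta unknown, as in the paper, the likelihood is maximized jointly over both parameters, and the binomial removals thin the risk set at each censoring stage; the closed form above then no longer applies.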

Keywords: binomial distribution, d-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study

Procedia PDF Downloads 322
11304 Experimental Investigation of Boundary Layer Transition on Rotating Cones in Axial Flow at 0 and 35 Degrees Angle of Attack

Authors: Ali Kargar, Kamyar Mansour

Abstract:

In this paper, experimental results from hot-wire anemometry and smoke visualization are presented. The hot-wire anemometer results for the critical Reynolds number and the transitional Reynolds number are compared with previous results, and excellent agreement is found for the transitional Reynolds number, which is also compared with previous linear stability results. The smoke visualizations clearly show the cross-flow vortices that arise in the transition process from laminar to turbulent flow. A non-zero angle of attack is also considered, and our results are compared with the linear stability theory of Garrett et al. (2007). The visualization and hot-wire anemometer results have also been compared graphically. The goal of this paper is to assess the reliability of hot-wire anemometry and smoke visualization in transition problems, to check the reliability of linear stability theory for this case, and to compare our results with trusted experimental work.

Keywords: transitional reynolds number, wind tunnel, rotating cone, smoke visualization

Procedia PDF Downloads 307
11303 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to problems related to monetary policy, bank regulation, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policy, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
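The employer-employee colocation idea described above can be sketched conceptually in plain Python, without MPI: each firm and all of its employees are pinned to one partition, so no employer-employee interaction crosses a process boundary. The firm counts, firm sizes, and round-robin firm assignment below are arbitrary illustrative choices, not the authors' load-balancing scheme:

```python
import random
from collections import Counter

def partition_by_employer(agents, employer_of, n_procs):
    """Pin each employer, and hence all of its employees, to a single process:
    the representative employer-employee interaction graph never crosses ranks."""
    employers = sorted(set(employer_of.values()))
    proc_of_employer = {e: i % n_procs for i, e in enumerate(employers)}  # round-robin
    return {a: proc_of_employer[employer_of[a]] for a in agents}

rng = random.Random(3)
agents = list(range(1000))                               # 1000 worker agents
employer_of = {a: rng.randrange(100) for a in agents}    # 100 firms, random employees
placement = partition_by_employer(agents, employer_of, n_procs=8)

# Check: every firm's workforce sits on exactly one process (zero cross-rank
# employer-employee edges), and the per-process load is roughly balanced.
procs_per_firm = {}
for a in agents:
    procs_per_firm.setdefault(employer_of[a], set()).add(placement[a])
cross_free = all(len(p) == 1 for p in procs_per_firm.values())
load = Counter(placement.values())
```

The dense random graphs (e.g., consumption markets) cannot be colocated this way, which is why the paper introduces local proxies like sales outlets and local banks on each rank.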

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 130
11302 Feeling Ambivalence Towards Values

Authors: Aysheh Maslemani, Ruth Mayo, Greg Maio, Ariel Knafo-Noam

Abstract:

Values are abstract ideals that serve as guiding principles in one's life. As inherently positive and desirable concepts, values are seen as motivators of actions and behaviors. However, research has largely ignored the possibility that values may elicit negative feelings despite being explicitly important to us. In the current study, we aim to examine this possibility. Four hundred participants over 18 years of age (M = 41.6, SD = 13.7; 178 female) from the UK completed a questionnaire in which they were asked to indicate their level of positive and negative feelings towards a comprehensive list of values and then to report the importance of these values to them. The results support our argument by showing that people can have negative feelings towards their values, and that people can feel both positive and negative emotions towards their values simultaneously, i.e., feel ambivalent. We ran a mixed-effects model with ambivalence, value type, and their interaction as fixed effects, with a by-subject random intercept and a by-subject random slope for ambivalence. The results reveal that values that elicit less ambivalence are rated as more important. This research contributes to the field of values on multiple levels. Theoretically, it uncovers new insights about values, such as the existence of negative emotions towards them and the presence of ambivalence towards values. These findings may inspire future studies to explore the effects of ambivalence on people's well-being, behavior, cognition, and affect. We discuss the findings and consider their implications for understanding the social psychological mechanisms underpinning value ambivalence.

Keywords: emotion, social cognition, values, ambivalence

Procedia PDF Downloads 68