Search results for: fault tree analysis (FTA) method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 39688


36418 A Prediction Method for Large-Size Event Occurrences in the Sandpile Model

Authors: S. Channgam, A. Sae-Tang, T. Termsaithong

Abstract:

In this research, the occurrences of large-size events in various system sizes of the Bak-Tang-Wiesenfeld sandpile model are considered. The system sizes (square lattices) of the model considered here are 25×25, 50×50, 75×75 and 100×100. The cross-correlation between the time series of the ratio of sites containing 3 grains and the large-size event time series is analyzed for these 4 system sizes. Moreover, a prediction method for large-size events in the 50×50 system is introduced. Lastly, it is shown that this prediction method provides slightly higher efficiency than random predictions.
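As an illustration of the quantities the abstract pairs together, the following toy sketch (not the authors' code; the lattice size, step count and "large event" threshold below are arbitrary choices) runs a small BTW sandpile, records the ratio of sites holding 3 grains, and cross-correlates it with an indicator of large avalanches:

```python
import numpy as np

def topple(grid):
    """Relax the lattice until every site holds < 4 grains; return the
    avalanche size (total number of topplings). Grains falling off the
    open boundary are lost, as in the standard BTW model."""
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1

def run_sandpile(n=20, steps=1500, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.zeros((n, n), dtype=int)
    ratio3, sizes = [], []
    for _ in range(steps):
        i, j = rng.integers(0, n, size=2)
        grid[i, j] += 1                     # drop one grain at a random site
        sizes.append(topple(grid))
        ratio3.append(np.mean(grid == 3))   # ratio of sites containing 3 grains
    return np.array(ratio3), np.array(sizes)

ratio3, sizes = run_sandpile()
# Zero-lag cross-correlation between the ratio-of-3 series and an
# indicator of "large" events (avalanche size above the 90th percentile)
large = (sizes > np.quantile(sizes, 0.9)).astype(float)
corr = np.corrcoef(ratio3, large)[0, 1]
```

A positive correlation here is the kind of signal a threshold-based predictor can exploit to beat random guessing.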

Keywords: Bak-Tang-Wiesenfeld sandpile model, cross-correlation, avalanches, prediction method

Procedia PDF Downloads 360
36417 Computer Simulations of Stress Corrosion Studies of Quartz Particulate Reinforced ZA-27 Metal Matrix Composites

Authors: K. Vinutha

Abstract:

The stress corrosion resistance of ZA-27/TiO2 metal matrix composites (MMCs) in high-temperature acidic media has been evaluated using an autoclave. The MMCs were fabricated by the liquid melt metallurgy route using the vortex method. TiO2 particulates of 50-80 µm in size were added to the matrix, and composites containing 2, 4 and 6 weight percent TiO2 were prepared. Stress corrosion tests were conducted by the weight loss method for different exposure times, normalities and temperatures of the acidic medium. The corrosion rates of the composites were lower than that of the matrix ZA-27 alloy under all conditions.
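The weight loss method referenced above reduces to a standard conversion from mass loss to a corrosion rate. A minimal sketch (the ASTM G1-style constant is standard; the coupon numbers and the ZA-27 density below are hypothetical, not from this study):

```python
def corrosion_rate_mm_per_year(weight_loss_g, area_cm2, hours, density_g_cm3):
    """Weight-loss corrosion rate, ASTM G1-style: CR = K*W/(A*T*D),
    with K = 8.76e4 giving mm/year for W in g, A in cm^2, T in h,
    D in g/cm^3."""
    K = 8.76e4
    return K * weight_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical coupon: 12 mg lost over one week of exposure on 10 cm^2,
# assuming a ZA-27 density of about 5.0 g/cm^3
rate = corrosion_rate_mm_per_year(0.012, 10.0, 168.0, 5.0)
```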

Keywords: autoclave, MMC’s, stress corrosion, vortex method

Procedia PDF Downloads 455
36416 Theoretical Modeling of Mechanical Properties of Eco-Friendly Composites Derived from Sugar Palm

Authors: J. Sahari, S. M. Sapuan

Abstract:

Eco-friendly composites have been successfully prepared using the sugar palm tree as a source. The effect of fibre content on the mechanical properties of sugar palm fibre/sugar palm starch (SPF/SPS) biocomposites was studied, and the experimentally measured tensile properties (tensile strength and modulus) of the biocomposites were compared with existing theories of reinforcement. The biocomposites were prepared with different amounts of fibres (10%, 20% and 30% by weight). The mechanical properties of plasticized SPS improved with the incorporation of fibres. Both approaches (experimental and theoretical) show that the Young's modulus of the biocomposites consistently increases as sugar palm fibre (SPF) is added to the sugar palm starch (SPS) matrix. Surface morphological study through scanning electron microscopy showed a homogeneous distribution of fibres in the matrix with good adhesion, which plays an important role in improving the mechanical properties of biocomposites. The observed deviations between the experimental and theoretical values are explained by the simplifying model assumptions applied to the configuration of the composites, in particular the sugar palm starch composites.
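The "existing theories of reinforcement" against which such tensile data are typically compared include the Voigt (parallel) and Reuss (series) rules of mixtures. A hedged sketch with hypothetical moduli, not values from this study:

```python
def rule_of_mixtures(E_f, E_m, vf):
    """Voigt (iso-strain, upper bound) and Reuss (iso-stress, lower bound)
    composite moduli for fibre volume fraction vf."""
    upper = vf * E_f + (1 - vf) * E_m
    lower = 1.0 / (vf / E_f + (1 - vf) / E_m)
    return upper, lower

# Hypothetical moduli in GPa: a stiff natural fibre in a soft
# plasticized-starch matrix (illustrative values only)
upper, lower = rule_of_mixtures(E_f=3.0, E_m=0.05, vf=0.3)
```

Experimental moduli of short-fibre composites usually fall between the two bounds, and the gap to the Voigt bound reflects the simplifying assumptions (perfect alignment, perfect adhesion) the abstract mentions.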

Keywords: eco-friendly, biocomposite, mechanical, experimental, theoretical

Procedia PDF Downloads 427
36415 PM10 Chemical Characteristics in a Background Site at the Universidad Libre Bogotá

Authors: Laura X. Martinez, Andrés F. Rodríguez, Ruth A. Catacoli

Abstract:

PM10 concentrations generally maintain a constant trend, with the exception of some places where they frequently surpass the limits established by Colombian legislation. The community that surrounds the Universidad Libre Bogotá comprises a considerable number of students and workers, all of whom are possibly exposed to PM10 for long periods of time while on campus. Thus, the chemical characterization of PM10 in the ambient air at the Universidad Libre Bogotá was identified as the research problem. A Hi-Vol sampler and EPA Test Method 5 were used to determine whether the air quality is adequate for the human respiratory system, and quartz fiber filters were used during sampling. Samples were taken three days a week during a dry period throughout the months of November and December 2015. The gravimetric analysis method was used to determine PM10 concentrations. The chemical characterization includes non-conventional carcinogenic pollutants: atomic absorption spectrophotometry (AAS) was used for the determination of metals, and VOCs were analyzed using Fourier transform infrared spectroscopy (FTIR). In this way, PM10 concentrations ranging from 13 µg/m3 to 66 µg/m3 were obtained, values below the national standard. This evidence indicates that PM10 concentrations over a 24-hour exposure period are lower than the limits established by Colombian law, Resolution 610 of 2010; however, when compared with the limits set by the World Health Organization (WHO), these concentrations could exceed permissible levels.
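The gravimetric step described above is a simple mass-over-volume calculation. A sketch under assumed sampler parameters (the flow rate and filter mass gain below are illustrative, not the study's data):

```python
def pm10_concentration_ug_m3(filter_gain_mg, flow_m3_min, minutes):
    """Gravimetric PM10: particle mass collected on the filter divided
    by the volume of air drawn through the sampler (mg -> µg)."""
    volume_m3 = flow_m3_min * minutes
    return filter_gain_mg * 1000.0 / volume_m3

# Hypothetical 24-hour Hi-Vol run: 1.13 m^3/min flow, 80 mg mass gain
conc = pm10_concentration_ug_m3(80.0, 1.13, 24 * 60)
```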

Keywords: air quality, atomic absorption spectrophotometry, gas chromatography, particulate matter

Procedia PDF Downloads 235
36414 Transdisciplinary Methodological Innovation: Connecting Natural and Social Sciences Research through a Training Toolbox

Authors: Jessica M. Black

Abstract:

Although much of natural and social science research aims to enhance human flourishing and address social problems, the training within the two fields is significantly different across theory, methodology, and implementation of results. Social scientists are trained in social, psychological, and to the extent that it is relevant to their discipline, spiritual development, theory, and accompanying methodologies. They tend not to receive training or learn about accompanying methodology related to interrogating human development and social problems from a biological perspective. On the other hand, those in the natural sciences, and for the purpose of this work, human biological sciences specifically – biology, neuroscience, genetics, epigenetics, and physiology – are often trained first to consider cellular development and related methodologies, and may not have opportunity to receive formal training in many of the foundational principles that guide human development, such as systems theory or person-in-environment framework, methodology related to tapping both proximal and distal psycho-social-spiritual influences on human development, and foundational principles of equity, justice and inclusion in research design. There is a need for disciplines heretofore siloed to know one another, to receive streamlined, easy to access training in theory and methods from one another and to learn how to build interdisciplinary teams that can speak and act upon a shared research language. Team science is more essential than ever, as are transdisciplinary approaches to training and research design. This study explores the use of a methodological toolbox that natural and social scientists can use by employing a decision-making tree regarding project aims, costs, and participants, among other important study variables. The decision tree begins with a decision about whether the researcher wants to learn more about social sciences approaches or biological approaches to study design. 
The toolbox and platform are flexible, such that users can also choose among modules, for instance, reviewing epigenetics or community-based participatory research even if those are already aspects of their home field. To start, both natural and social scientists would receive training on systems science, team science, transdisciplinary approaches, and translational science. Next, social scientists would receive training on grounding biological theory and the following methodological approaches and tools: physiology, (epi)genetics, non-invasive neuroimaging, invasive neuroimaging, endocrinology, and the gut-brain connection. Natural scientists would receive training on grounding social science theory and on measurement, including variables, assessments and surveys on human development related to the developing person (e.g., temperament and identity), microsystems (e.g., systems that directly interact with the person, such as family and peers), mesosystems (e.g., systems that interact with one another but do not directly interact with the individual person, such as parent and teacher relationships with one another), exosystems (e.g., spaces and settings that may come back to affect the individual person, such as a parent's work environment, but within which the individual does not directly interact), macrosystems (e.g., wider culture and policy), and the chronosystem (e.g., historical time, such as the generational impact of trauma). Participants will be able to engage with the toolbox and one another to foster increased transdisciplinary work.
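The decision-making tree described above can be pictured as a nested lookup from a researcher's answers to recommended training modules. The structure below is a hypothetical sketch; the module names are taken from the abstract, but the toolbox itself is not a published interface:

```python
# Hypothetical module layout for the training toolbox described above.
TOOLBOX = {
    "social": {
        "theory": ["systems theory", "person-in-environment framework"],
        "measurement": ["developmental surveys", "ecological-systems assessment"],
    },
    "biological": {
        "neuroimaging": ["non-invasive neuroimaging", "invasive neuroimaging"],
        "molecular": ["(epi)genetics", "endocrinology", "gut-brain connection"],
    },
}

def recommend(choices, tree=TOOLBOX):
    """Walk the decision tree along the user's answers and return the
    recommended training modules."""
    node = tree
    for answer in choices:
        node = node[answer]
    return node

modules = recommend(["biological", "molecular"])
```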

Keywords: methodology, natural science, social science, transdisciplinary

Procedia PDF Downloads 89
36413 An Analysis of Language Borrowing among Algerian University Students Using Online Facebook Conversations

Authors: Messaouda Annab

Abstract:

The rapid development of technology has led to an important context in which different languages and structures are used in the same conversations. This paper investigates the practice of language borrowing on a social media platform, namely Facebook, among Algerian Vernacular Arabic (AVA) speaking students. In other words, this study explores how Algerian students incorporate lexical borrowings from English in their online conversations. The paper examines the relationships between language, culture and identity in a multilingual group. The main objective is to determine the cultural and linguistic functions that borrowing fulfills in social media and to explain the possible factors underlying English borrowing. The study uses an online research method based on ten online Facebook conversations, in the form of private messages, collected from Bachelor and Masters Algerian students recruited from the English department at the University of Oum El-Bouaghi. The analysis of the data revealed that the social media platform provided users with opportunities to shift from one language to another, a practice noticeable in the students' online conversations. English borrowing was the most salient language practice alongside Arabic, the mother tongue of the chosen sample. The analysis suggests that the participants are skilled in more than one language.

Keywords: borrowing, language performance, linguistic background, social media

Procedia PDF Downloads 139
36412 Analysis of the Engineering Judgement Influence on the Selection of Geotechnical Parameters Characteristic Values

Authors: K. Ivandic, F. Dodigovic, D. Stuhec, S. Strelec

Abstract:

A characteristic value of a geotechnical parameter results from an engineering assessment. Its selection has to be based on technical principles and standards of engineering practice. It has been shown that the results of the engineering assessments of different authors for the same problem and input data are significantly dispersed. A survey was conducted in which participants had to estimate the force that causes a 10 cm displacement at the top of an axially compressed pile in situ. Fifty experts from all over the world took part in it. The lowest estimated force value was 42% and the highest was 133% of the force measured in the corresponding static pile load test. These extreme values result in significantly different technical solutions to the same engineering task. When selecting a characteristic value of a geotechnical parameter, the influence of the engineering assessment can be reduced by using statistical methods. An informative annex of Eurocode 1 prescribes the method of selecting the characteristic values of material properties. This is followed by Eurocode 7, with certain specificities linked to selecting characteristic values of geotechnical parameters. The paper shows the procedure for selecting characteristic values of a geotechnical parameter by using a statistical method under different initial conditions. The aim of the paper is to quantify the engineering assessment in the example of determining a characteristic value of a specific geotechnical parameter. It is assumed that this assessment is a random variable and that its statistical features will be determined. For this purpose, a survey was conducted among relevant experts from the field of geotechnical engineering. Finally, the results of the survey and the application of the statistical method were compared.
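One common statistical reading of "characteristic value" is the 5% fractile with sampling uncertainty included, as in EN 1990 Annex D for the "coefficient of variation unknown" case. A sketch with hypothetical friction-angle samples (illustrative numbers, not data from this paper):

```python
import math
import statistics

# One-sided 95% Student-t quantiles t_{0.95, dof} from standard tables
T95 = {2: 2.920, 3: 2.353, 4: 2.132, 9: 1.833, 19: 1.729}

def characteristic_value(samples):
    """5% fractile of a parameter per the EN 1990 Annex D 'coefficient of
    variation unknown' case: X_k = m - t_{0.95,n-1} * sqrt(1 + 1/n) * s."""
    n = len(samples)
    m = statistics.mean(samples)
    s = statistics.stdev(samples)
    k_n = T95[n - 1] * math.sqrt(1 + 1 / n)
    return m - k_n * s

# Hypothetical friction-angle measurements (degrees) from five tests
x_k = characteristic_value([28, 31, 26, 30, 29])
```

The `sqrt(1 + 1/n)` factor reproduces the tabulated k_n values of Annex D; with few samples the characteristic value sits well below the mean, which is exactly how a statistical method tempers an optimistic engineering assessment.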

Keywords: characteristic values, engineering judgement, Eurocode 7, statistical methods

Procedia PDF Downloads 277
36411 Model Averaging in a Multiplicative Heteroscedastic Model

Authors: Alan Wan

Abstract:

In recent years, the body of literature on frequentist model averaging in statistics has grown significantly. Most of this work focuses on models with different mean structures but leaves out the variance consideration. In this paper, we consider a regression model with multiplicative heteroscedasticity and develop a model averaging method that combines maximum likelihood estimators of unknown parameters in both the mean and variance functions of the model. Our weight choice criterion is based on a minimisation of a plug-in estimator of the model average estimator's squared prediction risk. We prove that the new estimator possesses an asymptotic optimality property. Our investigation of finite-sample performance by simulations demonstrates that the new estimator frequently exhibits very favourable properties compared to some existing heteroscedasticity-robust model average estimators. The model averaging method hedges against the selection of very bad models and serves as a remedy to variance function misspecification, which often discourages practitioners from modeling heteroscedasticity altogether. The proposed model average estimator is applied to the analysis of two real data sets.
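The weight-choice idea, minimizing an estimate of the model average estimator's squared prediction risk, can be caricatured with a grid search over weights. The sketch below substitutes a validation-set MSE for the paper's plug-in criterion and uses simple polynomial candidates, so it is an analogy rather than the proposed estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 300)
sigma = 0.2 * np.exp(0.5 * np.abs(x))      # multiplicative heteroscedasticity
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0.0, sigma)

x_fit, y_fit = x[:200], y[:200]
x_val, y_val = x[200:], y[200:]

# Two candidate mean structures: linear and quadratic polynomial fits
preds = [np.polyval(np.polyfit(x_fit, y_fit, deg), x_val) for deg in (1, 2)]

# Choose the averaging weight on a grid by minimising an estimate of the
# squared prediction risk (validation MSE here, not the paper's plug-in)
weights = np.linspace(0, 1, 101)
risks = [np.mean((w * preds[0] + (1 - w) * preds[1] - y_val) ** 2)
         for w in weights]
w_star = weights[int(np.argmin(risks))]
```

By construction the averaged estimator's estimated risk is never worse than either candidate alone, which is the "hedging against very bad models" property the abstract highlights.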

Keywords: heteroscedasticity-robust, model averaging, multiplicative heteroscedasticity, plug-in, squared prediction risk

Procedia PDF Downloads 347
36410 Searchable Encryption in Cloud Storage

Authors: Ren Junn Hwang, Chung-Chien Lu, Jain-Shing Wu

Abstract:

Outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, it is difficult for the cloud server to retrieve exactly the target files from among the encrypted files. This study proposes a protocol for performing multi-keyword searches over encrypted cloud data by applying k-nearest-neighbor technology. The protocol ranks the relevance scores of encrypted files and keywords, and prevents cloud servers from learning the search keywords submitted by a cloud user. To reduce file transfer communication costs, the cloud server returns encrypted files in order of relevance. Moreover, when a cloud user inputs an incorrect keyword whose number of wrong letters does not exceed a given threshold, the user can still retrieve the target files from the cloud server. In addition, the proposed scheme satisfies security requirements for outsourced data storage.
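Two ingredients of such a protocol, relevance ranking and fault tolerance to misspelled keywords, can be sketched in plaintext. This is only the logical layer: the actual scheme computes these scores over encrypted vectors via secure k-nearest-neighbor inner products, which is omitted here.

```python
def edit_distance(a, b):
    """Levenshtein distance via a single-row dynamic programme."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def expand_query(query, vocabulary, threshold=1):
    """Fault-tolerant step: map each (possibly misspelled) keyword to the
    vocabulary words within the edit-distance threshold."""
    return {w for q in query for w in vocabulary
            if edit_distance(q, w) <= threshold}

def rank_files(index, query_keywords):
    """Rank files by how many query keywords they contain (descending),
    so the most relevant files can be returned first."""
    scores = {f: len(set(ws) & set(query_keywords)) for f, ws in index.items()}
    return sorted((f for f in scores if scores[f] > 0),
                  key=lambda f: scores[f], reverse=True)

index = {"f1": ["cloud", "storage"], "f2": ["cloud", "search", "storage"]}
vocab = {w for ws in index.values() for w in ws}
results = rank_files(index, expand_query(["clouf", "search"], vocab))
```

Here the misspelled "clouf" still resolves to "cloud" because it is within one edit of the indexed keyword.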

Keywords: fault-tolerance search, multi-keywords search, outsource storage, ranked search, searchable encryption

Procedia PDF Downloads 355
36409 Pricing European Continuous-Installment Options under Regime-Switching Models

Authors: Saghar Heidari

Abstract:

In this paper, we study the valuation problem of European continuous-installment options under Markov-modulated models with a partial differential equation approach. Due to the opportunity to continue or stop paying installments, the valuation problem under regime-switching models can be formulated as coupled partial differential equations (CPDE) with free boundary features. To value the installment options, we express the truncated CPDE as a linear complementarity problem (LCP); a finite element method is then proposed to solve the resulting variational inequality. Under some appropriate assumptions, we establish the stability of the method and present numerical results to examine the rate of convergence and accuracy of the proposed method for the pricing problem under the regime-switching model.

Keywords: continuous-installment option, European option, regime-switching model, finite element method

Procedia PDF Downloads 121
36408 A Robust System for Foot Arch Type Classification from Static Foot Pressure Distribution Data Using Linear Discriminant Analysis

Authors: R. Periyasamy, Deepak Joshi, Sneh Anand

Abstract:

Foot posture assessment is important for evaluating foot types that cause gait and postural defects in all age groups. Although different methods are used for the classification of foot arch type in clinical/research examination, there is no clear approach for selecting the most appropriate measurement system. Therefore, the aim of this study was to develop a system for the evaluation of foot type as a clinical decision-making aid for the diagnosis of flat and normal arches, based on the Arch Index (AI) and a foot pressure distribution parameter, the Power Ratio (PR). The accuracy of the system was evaluated for 27 subjects with ages ranging from 24 to 65 years. Foot area measurements (hindfoot, midfoot, and forefoot) were acquired simultaneously from foot pressure intensity images using the portable PedoPowerGraph system, and the images were analyzed in the frequency domain to obtain the PR data. From our results, we obtain 100% classification accuracy for normal and flat feet by using the linear discriminant analysis method. We observe no misclassification of foot types because foot pressure distribution data are incorporated instead of the arch index (AI) alone. We found that the mid-foot pressure distribution ratio data and arch index (AI) value correlate well with foot arch type based on visual analysis. Therefore, this paper suggests that the proposed system makes it accurate and easy to determine foot arch type from the arch index (AI) together with mid-foot pressure distribution ratio data, instead of the physical area of contact alone. Hence, such a computational-tool-based system can help clinicians assess foot structure and cross-check their diagnosis of flat foot from the mid-foot pressure distribution.
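The classification step can be reproduced with a two-class Fisher discriminant on (AI, PR)-style features. The synthetic numbers below are illustrative stand-ins, not the study's measurements:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher discriminant direction w = Sw^{-1} (m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw, m1 - m0)

# Synthetic (AI, PR) features: illustrative stand-ins, not study data.
rng = np.random.default_rng(0)
normal_feet = rng.normal([0.21, 0.40], 0.01, size=(40, 2))  # class 0
flat_feet = rng.normal([0.30, 0.60], 0.01, size=(40, 2))    # class 1

w = fisher_lda_direction(normal_feet, flat_feet)
# Decision threshold halfway between the projected class means
threshold = ((normal_feet @ w).mean() + (flat_feet @ w).mean()) / 2

X = np.vstack([normal_feet, flat_feet])
y = np.array([0] * 40 + [1] * 40)
preds = (X @ w > threshold).astype(int)
accuracy = (preds == y).mean()
```

With two well-separated features the discriminant separates the classes perfectly, mirroring the 100% accuracy the abstract reports when PR is added to AI.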

Keywords: arch index, computational tool, static foot pressure intensity image, foot pressure distribution, linear discriminant analysis

Procedia PDF Downloads 486
36407 Feedback from a Service Evaluation of a Modified Intrauterine Device Inserter: A First Step to a Change of the Standard IUD Insertion Procedure

Authors: Desjardin, Michaels, Martinez, Ulmann

Abstract:

The copper IUD is one of the most efficient and cost-effective contraceptives. However, pain at insertion hampers the use of this method. This is especially unfortunate in nulliparous women, often younger, who are excellent candidates for this contraception, including emergency contraception. The standard insertion procedure for a copper IUD usually involves measurement of the uterine cavity with a hysterometer and the use of a tenaculum to facilitate device insertion. Both procedures cause patient pain, which often constitutes a limitation of the method. To overcome these issues, we have developed a modified inserter combined with a copper IUD. The singular design of the inserter includes a flexible inflatable membrane technology allowing easy access to the uterine cavity even in cases of abnormal uterine positions or a narrow cervical canal. Moreover, this inserter makes direct IUD insertion possible with no hysterometry and no need for a tenaculum. To assess device effectiveness and patient-reported pain, a study was conducted at two clinics in France with 31 individuals who wanted to use a copper IUD as their contraceptive method. IUD insertions were performed by four healthcare providers. Operators completed a questionnaire evaluating the effectiveness of the procedure (including correct fundal IUD placement and other usability questions) as well as their satisfaction. Patients also completed a questionnaire, and pain during the procedure was measured on a 10-cm Visual Analogue Scale (VAS). Analysis of the questionnaires indicates that correct IUD placement took place in more than 93% of women, which is a standard efficacy rate. It also demonstrates that IUD insertion resulted in no, light or moderate pain, predominantly in nulliparous women. No insertion resulted in severe pain (none above 6 cm on the 10-cm VAS). This translated into a high level of satisfaction from both patients and practitioners.
In addition, this modified inserter allowed a simplification of the insertion procedure: correct fundal placement was ensured with no need for hysterometry prior to insertion (100%) nor for a cervical tenaculum to pull on the cervix (90%). Avoidance of both procedures contributed to the decrease in pain during insertion. Taken together, the results of the study demonstrate that this device constitutes a significant advance in the use of copper IUDs for any woman. It allows a simplification of the insertion procedure: there is no need for pre-insertion hysterometry and no need for traction on the cervix with a tenaculum. Increased comfort during insertion should allow a wider use of the method in nulliparous women and for emergency contraception. In addition, pain is often underestimated by practitioners, and fear of pain is clearly one of the blocking factors, as indicated by the analysis of the questionnaire. This evaluation brings interesting information on the use of this modified inserter for a standard copper IUD and promising perspectives for changing the standard IUD insertion procedure.

Keywords: contraception, IUD, innovation, pain

Procedia PDF Downloads 61
36406 Fake News Detection for Korean News Using Machine Learning Techniques

Authors: Tae-Uk Yun, Pullip Chung, Kee-Young Kwahk, Hyunchul Ahn

Abstract:

Fake news is defined as news articles that are intentionally and verifiably false and could mislead readers. The spread of fake news may provoke anxiety, chaos, fear, or irrational decisions by the public. Thus, detecting fake news and preventing its spread has become a very important issue in our society. However, due to the huge amount of fake news produced every day, it is almost impossible for humans to identify all of it. In this context, researchers have tried over the past years to develop automated fake news detection using machine learning techniques. However, to the best of our knowledge, no prior study has proposed an automated fake news detection method for Korean news. In this study, we aim to detect Korean fake news using text mining and machine learning techniques. Our proposed method consists of two steps. In the first step, the news contents to be analyzed are converted into quantified values using various text mining techniques (topic modeling, TF-IDF, and so on). Then, in step 2, classifiers are trained using the values produced in step 1. As classifiers, machine learning techniques such as logistic regression, backpropagation networks, support vector machines, and deep neural networks can be applied. To validate the effectiveness of the proposed method, we collected about 200 short Korean news items from Seoul National University's FactCheck, which provides detailed analysis reports from 20 media outlets and links to source documents for each case. Using this dataset, we will identify which text features are important as well as which classifiers are effective in detecting Korean fake news.
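Step 1's TF-IDF conversion can be sketched in a few lines. Tokenization (of Korean or any other text) is assumed to have happened already, and the example tokens below are hypothetical:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF vectors (term -> weight dicts) for tokenised documents.
    Terms appearing in every document get weight 0 (idf = log 1 = 0)."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c / len(doc) * idf[t] for t, c in Counter(doc).items()}
            for doc in docs]

# Hypothetical pre-tokenised headlines (real input would be Korean tokens)
docs = [["moon", "landing", "hoax"], ["moon", "base", "found"]]
vectors = tf_idf(docs)
```

These term-weight vectors are the quantified values fed to the step-2 classifiers.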

Keywords: fake news detection, Korean news, machine learning, text mining

Procedia PDF Downloads 251
36405 Divergence Regularization Method for Solving Ill-Posed Cauchy Problem for the Helmholtz Equation

Authors: Benedict Barnes, Anthony Y. Aidoo

Abstract:

A Divergence Regularization Method (DRM) is used to regularize the ill-posed Helmholtz equation where the boundary deflection is inhomogeneous in a Hilbert space H. The DRM incorporates a positive integer scalar which homogenizes the inhomogeneous boundary deflection in the Cauchy problem of the Helmholtz equation. This ensures the existence, as well as the uniqueness, of a solution for the equation. The DRM restores all three conditions of well-posedness in the sense of Hadamard.

Keywords: divergence regularization method, Helmholtz equation, ill-posed inhomogeneous Cauchy boundary conditions

Procedia PDF Downloads 171
36404 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis

Authors: Catalina Sau Man Ng

Abstract:

Workplace bullying is generally defined as a form of direct and indirect maltreatment at work, including harassing, offending, socially isolating someone or negatively affecting someone's work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying differ across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using the latent class analysis approach. It is often argued that the traditional classification of workplace bullying into the dichotomous 'victims' and 'non-victims' may not fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within the latent variable, a more thorough understanding of workplace bullying in real-life situations may be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to demonstrate higher construct and predictive validity. In the present study, a representative sample of 2814 employees (male: 54.7%, female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: 'non-victims', 'seldom bullied', 'sometimes bullied', and 'victims'. The results for each latent class and the implications of the study are also discussed in this working paper.
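Latent class analysis with binary bullying items is essentially a Bernoulli mixture fitted by expectation-maximisation. A self-contained sketch on synthetic data (two classes rather than the four hypothesized, purely for illustration):

```python
import numpy as np

def lca_em(X, k, iters=200, seed=0):
    """EM for a latent class model over binary items (Bernoulli mixture).
    X: (n, d) 0/1 matrix. Returns class weights pi, item-endorsement
    probabilities theta (k, d), and responsibilities r (n, k)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(k, 1.0 / k)
    theta = rng.uniform(0.25, 0.75, size=(k, d))
    for _ in range(iters):
        # E-step: posterior class membership for each respondent
        log_p = (X @ np.log(theta).T
                 + (1 - X) @ np.log(1 - theta).T
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update class sizes and item probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        theta = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return pi, theta, r

# Synthetic data: 100 frequently-bullied and 100 rarely-bullied respondents
rng = np.random.default_rng(1)
X = np.vstack([(rng.uniform(size=(100, 5)) < 0.85),
               (rng.uniform(size=(100, 5)) < 0.10)]).astype(float)
pi, theta, resp = lca_em(X, k=2)
```

In practice the number of classes is chosen by fit statistics (e.g. BIC) rather than fixed in advance, which is how the hypothesized four-class solution would be tested.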

Keywords: latent class analysis, prevalence, survey, workplace bullying

Procedia PDF Downloads 305
36403 Constructivism Learning Management in Mathematics Analysis Courses

Authors: Komon Paisal

Abstract:

The purposes of this research were (1) to create a learning activity for constructivism, (2) to study learning achievement in Mathematical Analysis courses, and (3) to study students' attitudes toward the learning activity for constructivism. The samples in this study were divided into two parts: 3 Mathematical Analysis instructors of Suan Sunandha Rajabhat University, who provided basic information and attended the seminar, and 17 Mathematical Analysis students, who were studying in the academic year and engaged in the learning activity for constructivism. The research instruments were constructivism lesson plans, a subjective Mathematical Analysis achievement test with a reliability index of 0.8119, and an attitude test concerning the students' attitudes toward the Mathematical Analysis learning activity for constructivism. The results of the research show that the efficiency of the Mathematical Analysis learning activity for constructivism is 73.05/72.16, which is higher than the expected criterion of 70/70. The research additionally finds that the average learning achievement score of students who engaged in the learning activities for constructivism is equal to 70% and that the students' attitude toward the learning activity for constructivism is at the medium level.

Keywords: constructivism, learning management, mathematics analysis courses, learning activity

Procedia PDF Downloads 515
36402 Event Extraction, Analysis, and Event Linking

Authors: Anam Alam, Rahim Jamaluddin Kanji

Abstract:

With the rapid growth of events everywhere, event extraction has become an important means of retrieving information from unstructured data. One of the challenging problems is extracting the events themselves. An event is an observable occurrence of interaction among entities. This paper investigates the event extraction capabilities of three software tools: Wandora, Nitro and SPSS. We applied the standard text mining techniques of these tools to the data sets of (i) the Afghan War Diaries (AWD collection), (ii) MUC4 and (iii) WebKB. Information retrieval measures such as precision and recall were computed over an extensive set of event extraction experiments. The experimental study analyzes the difference between events extracted by the software and by humans. This approach helps to construct an algorithm that can be applied with different machine learning methods.
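The precision and recall figures reported in such comparisons come from a simple overlap computation between the tool's extracted events and a human-annotated gold standard (the event identifiers below are hypothetical):

```python
def precision_recall(extracted, gold):
    """Precision and recall of extracted events against a gold standard."""
    tp = len(set(extracted) & set(gold))
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical tool output vs. human annotation
p, r = precision_recall(["attack-01", "meet-02", "move-03"],
                        ["attack-01", "move-03", "flee-04"])
```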

Keywords: event extraction, event analysis, extraction method, Wandora, Nitro, SPSS, Afghan War Diaries (AWD), MUC4, WebKB, precision, recall, evaluation

Procedia PDF Downloads 568
36401 Systematic Review of Functional Analysis in Brazil

Authors: Felipe Magalhaes Lemos

Abstract:

Functional behavior analysis is a procedure that has been studied for several decades by behavior analysts. In Brazil, we still have few studies in the area, so it was decided to carry out a systematic review of the articles published in the area by Brazilians. A search was done on the following scientific article registration sites: PsycINFO, ERIC, ISI Web of Science, Virtual Health Library. The research includes (a) peer-reviewed studies that (b) have been carried out in Brazil containing (c) functional assessment as a pre-treatment through (d) experimental procedures, direct or indirect observation and measurement of behavior problems (e) demonstrating a relationship between environmental events and behavior. During the review, 234 papers were found; however, only 9 were included in the final analysis. Of the 9 articles extracted, only 2 presented functional analysis procedures with manipulation of environmental variables, while the other 7 presented different procedures for a descriptive behavior assessment. Only the two studies using "functional analysis" used graphs to demonstrate the prevalent function of the behavior. Other studies described procedures and did not make clear the causal relationship between environment and behavior. There is still confusion in Brazil regarding the terms "functional analysis", "descriptive assessment" and "contingency analysis," which are generally treated in the same way. This study shows that few articles are published with a focus on functional analysis in Brazil.

Keywords: behavior, contingency, descriptive assessment, functional analysis

Procedia PDF Downloads 126
36400 A New Method to Winner Determination for Economic Resource Allocation in Cloud Computing Systems

Authors: Ebrahim Behrouzian Nejad, Rezvan Alipoor Sabzevari

Abstract:

Cloud computing systems are large-scale distributed systems that focus on large-scale resource sharing, cooperation among several organizations, and use in new applications. One of the main challenges in this realm is resource allocation. There are many different approaches to resource allocation in cloud computing, among which economic methods are common. Among these, the auction-based method has greater prominence compared with the fixed-price method. The double combinatorial auction is one of the proper ways of allocating resources in cloud computing. This method includes two phases: winner determination and resource allocation. In this paper, a new method is presented to determine winners in double combinatorial auction-based resource allocation using the Imperialist Competitive Algorithm (ICA). The experimental results show that the proposed method produces a higher number of winner users than the genetic algorithm, while the genetic algorithm produces a higher number of winner providers.
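The ICA itself is beyond a short sketch, but the winner determination problem it searches over can be illustrated with the classic greedy baseline for a double auction: match the highest user bids with the cheapest provider asks while a trade is still profitable. The names and prices below are hypothetical:

```python
def greedy_winners(bids, asks):
    """Greedy baseline for double-auction winner determination: pair the
    k-th highest user bid with the k-th cheapest provider ask while the
    bid still covers the ask; trade at the midpoint price.
    bids/asks: lists of (name, price-per-unit) tuples."""
    bids = sorted(bids, key=lambda b: -b[1])
    asks = sorted(asks, key=lambda a: a[1])
    matches = []
    for (user, bid), (provider, ask) in zip(bids, asks):
        if bid < ask:
            break              # later pairs cannot match either
        matches.append((user, provider, (bid + ask) / 2))
    return matches

matches = greedy_winners([("u1", 10), ("u2", 6), ("u3", 3)],
                         [("p1", 2), ("p2", 5), ("p3", 9)])
```

Metaheuristics such as ICA or a genetic algorithm explore beyond this greedy solution when bids are combinatorial (bundles of resources) and the matching above is no longer optimal.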

Keywords: cloud computing, resource allocation, double auction, winner determination

Procedia PDF Downloads 343
36399 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model for a response variable in the form of count data that follows a Poisson distribution. A pair of count variables that show high correlation can be modeled by bivariate Poisson regression; the numbers of infant deaths and maternal deaths are count data of this kind. Poisson regression assumes equidispersion, i.e., equal mean and variance. Actual count data, however, can have a variance greater or smaller than the mean (overdispersion and underdispersion); violations of this assumption can be overcome by applying generalized Poisson regression. In addition, the characteristics of each regency can affect the number of cases occurring, and this spatial heterogeneity can be handled by geographically weighted regression. This study analyzes the numbers of infant and maternal deaths in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling with adaptive bisquare kernel weighting produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, clean and healthy household behavior, and married women whose first marriage occurred under the age of 18.
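The adaptive bisquare kernel weighting mentioned above can be sketched as follows. This is a minimal illustration assuming the commonly used form w = (1 - (d/b)^2)^2 for d < b and 0 otherwise, with the bandwidth b set adaptively to the distance of the k-th nearest observation site:

```python
def bisquare_weights(dists, k):
    """Adaptive bisquare kernel weights for one regression location.

    dists: distances from this location to every observation site.
    k: number of neighbours defining the adaptive bandwidth.
    """
    b = sorted(dists)[k - 1]  # adaptive bandwidth: distance to k-th nearest site
    return [(1 - (d / b) ** 2) ** 2 if d < b else 0.0 for d in dists]
```

In GWR, these weights enter a weighted likelihood at each location, so nearby regencies influence the local coefficient estimates more than distant ones.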

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 144
36398 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
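The exact penalty used in the study is not reproduced here, but its structure can be sketched: a lasso (L1) term for sparsity plus a Cauchy-type term that, in this hypothetical form, couples neighbouring predictor weights to encourage the grouped estimates described above. The functional form below is an assumption for illustration only:

```python
import math

def penalty(w, lam_l1, lam_cauchy, gamma=1.0):
    """Hypothetical composite penalty on a weight vector w.

    lam_l1 * sum(|w_j|)                       -- lasso term, drives sparsity
    lam_cauchy * sum(log(1 + (dw/gamma)^2))   -- Cauchy-type term on differences
                                                 of adjacent weights, encouraging
                                                 grouping of correlated predictors
    """
    l1 = sum(abs(v) for v in w)
    cauchy = sum(math.log(1 + ((w[j] - w[j - 1]) / gamma) ** 2)
                 for j in range(1, len(w)))
    return lam_l1 * l1 + lam_cauchy * cauchy
```

Note that equal adjacent weights incur no Cauchy cost, which is what rewards grouped solutions in ordered data such as CNA profiles.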

Keywords: partial least square regression, genetics data, negative shrinkage factors, high dimensional data, highly correlated data

Procedia PDF Downloads 29
36397 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules

Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang

Abstract:

Performance-related parameters of high concentration photovoltaic (HCPV) modules (e.g. current and voltage) are required when estimating the maximum power point using numerical and approximation methods. The maximum power point on the characteristic curve for a photovoltaic module varies when temperature or solar radiation is different. It is also difficult to estimate the output performance and maximum power point (MPP) due to the special characteristics of HCPV modules. Based on the p-n junction semiconductor theory, a brand new and simple method is presented in this study to directly evaluate the MPP of HCPV modules. The MPP of HCPV modules can be determined from an irradiated I-V characteristic curve, because there is a non-linear relationship between the temperature of a solar cell and solar radiation. Numerical simulations and field tests are conducted to examine the characteristics of HCPV modules during maximum output power tracking. The performance of the presented method is evaluated by examining the dependence of temperature and irradiation intensity on the MPP characteristics of HCPV modules. These results show that the presented method allows HCPV modules to achieve their maximum power and perform power tracking under various operation conditions. A 0.1% error is found between the estimated and the real maximum power point.
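Once an irradiated I-V characteristic curve is available, locating the MPP itself reduces to maximizing P = V * I over the sampled curve. A minimal sketch (this is not the paper's direct-estimation formula, which is derived from p-n junction theory, but the quantity it targets):

```python
def find_mpp(iv_points):
    """iv_points: list of (voltage, current) samples along the I-V curve.

    Returns (V_mpp, I_mpp, P_max), the sample maximizing power P = V * I.
    """
    v, i = max(iv_points, key=lambda p: p[0] * p[1])
    return v, i, v * i
```

For HCPV modules the curve itself shifts with cell temperature and irradiance, which is why a direct estimation method that avoids repeatedly sweeping the curve is valuable.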

Keywords: energy performance, high concentrated photovoltaic, maximum power point, p-n junction semiconductor

Procedia PDF Downloads 559
36396 Estimation of Carbon Losses in the Rice-Wheat Cropping System of Punjab, Pakistan

Authors: Saeed Qaisrani

Abstract:

The study was conducted to observe carbon and nutrient losses from the burning of rice residues in the rice-wheat cropping system. After the rice crop was harvested, the experiment was laid out in a randomized complete block design (RCBD) with 4 replications and a net plot size of 10 m x 20 m. Rice stubbles were managed by two methods: incorporation and burning of rice residues. Soil samples were taken to a depth of 30 cm before sowing and after harvesting of wheat. Wheat was sown after rice harvest using three practices, conventional tillage, minimum tillage, and zero tillage, to identify the best tillage practice. Laboratory and field experiments on wheat assessed the best tillage practice and residue management method together with an estimation of carbon losses. Data on the following parameters were recorded to check wheat quality and ensure food security in the region: establishment count, plant height, spike length, number of grains per spike, biological yield, fat content, carbohydrate content, protein content, and harvest index. Soil physico-chemical analyses, i.e., pH, electrical conductivity, organic matter, nitrogen, phosphorus, potassium, and carbon, were done in a soil fertility laboratory. The collected data were examined statistically, with an economic analysis to estimate the cost-benefit ratio of the different tillage techniques and residue management practices. The results showed that zero tillage had positive impacts on the growth, yield, and quality of wheat and is a cost-effective methodology. Similarly, incorporation proved a suitable and beneficial method for the soil, providing more nutrients and reducing the need for fertilizers. Burning of rice stubbles had negative impacts, including air pollution, nutrient loss, death of soil microbes, and carbon loss. Zero tillage technology is therefore recommended to reduce carbon losses and support food security in Pakistan.

Keywords: agricultural agronomy, food security, carbon sequestration, rice-wheat cropping system

Procedia PDF Downloads 262
36395 Design and Burnback Analysis of Three Dimensional Modified Star Grain

Authors: Almostafa Abdelaziz, Liang Guozhu, Anwer Elsayed

Abstract:

The determination of grain geometry is an important and critical step in the design of a solid propellant rocket motor. In this study, the design process involved parametric geometry modeling in CAD, MATLAB coding for performance prediction, and a 2D star grain ignition experiment. The 2D star grain burnback was achieved by creating a new surface at each web increment and calculating the geometrical properties at each step. The 2D star grain was further modified to burn as a tapered 3D star grain. A zero-dimensional method was used to calculate the internal ballistic performance. Experimental and theoretical results were compared in order to validate the performance prediction of the solid rocket motor. The results show that the use of the 3D grain geometry decreases the pressure inside the combustion chamber and enhances the volumetric loading ratio.
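The zero-dimensional method referred to above typically balances propellant mass generated at the burning surface against mass expelled through the nozzle. A sketch under the standard steady-state assumptions (Saint-Robert burn rate r = a * Pc^n and constant characteristic velocity c*); the numeric inputs in the test are made up for illustration:

```python
def chamber_pressure(a, n, rho_p, c_star, Ab, At):
    """Steady-state 0-D chamber pressure of a solid rocket motor.

    Equating mass generation rho_p * a * Pc**n * Ab with nozzle flow
    Pc * At / c_star gives Pc = (rho_p * a * c_star * Ab / At)**(1 / (1 - n)).
    Consistent SI units are assumed (a in m/s/Pa**n, rho_p in kg/m^3, etc.).
    """
    return (rho_p * a * c_star * Ab / At) ** (1.0 / (1.0 - n))
```

Because the burn area Ab changes with each web increment of the star grain, evaluating this relation at every burnback step yields the pressure-time trace used for performance prediction.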

Keywords: burnback analysis, rocket motor, star grain, three dimensional grains

Procedia PDF Downloads 218
36394 Machine Learning in Gravity Models: An Application to International Recycling Trade Flow

Authors: Shan Zhang, Peter Suechting

Abstract:

Predicting trade patterns is critical to decision-making in public and private domains, especially in the current context of trade disputes among major economies. In the past, U.S. recycling has relied heavily on strong demand for recyclable materials overseas. However, starting in 2017, a series of new recycling policies (bans and higher inspection standards) was enacted by multiple countries that had been the primary importers of recyclables from the U.S. prior to that point. As the global trade flow of recycling shifts, some new importers, mostly developing countries in South and Southeast Asia, have been overwhelmed by the sheer quantities of scrap materials they have received. As the leading exporter of recyclable materials, the U.S. now has a pressing need to build its recycling industry domestically. With respect to the global trade in scrap materials used for recycling, the interest in this paper is (1) predicting how the export of recyclable materials from the U.S. might vary over time, and (2) predicting how international trade flows for recyclables might change in the future. Focusing on three major recyclable materials with a history of trade, this study uses data-driven machine learning (ML) algorithms, both supervised (shrinkage and tree methods) and unsupervised (neural network methods), to decipher the international trade pattern of recycling. Forecasting the potential trade values of recyclables in the future could help importing countries, to which those materials will shift next, prepare related trade policies. Such policies can assist policymakers in minimizing negative environmental externalities and in finding the optimal amount of recyclables needed by each country. Such forecasts can also help exporting countries like the U.S. understand the importance of a healthy domestic recycling industry.
The preliminary results suggest that gravity models, in addition to a particular selection of macroeconomic predictor variables, are appropriate predictors of the total export value of recyclables. With the inclusion of variables measuring aspects of political conditions (trade tariffs and bans), predictions show that recyclable materials are shifting from more policy-restricted countries to less policy-restricted countries in the international recycling trade. Those countries also tend to have high manufacturing activity as a percentage of their GDP.
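As a baseline, the gravity model the study builds on predicts bilateral trade from economic mass and distance, T_ij = G * GDP_i^b1 * GDP_j^b2 / D_ij^b3; the ML variants add shrinkage, tree, and neural-network layers on top of this. A minimal sketch with hypothetical coefficients:

```python
def gravity_trade(gdp_i, gdp_j, dist, beta=(1.0, 1.0, 1.0), g=1.0):
    """Baseline gravity model of bilateral trade.

    Predicted flow rises with the economic mass of exporter i and
    importer j and falls with the distance between them. The default
    coefficients are placeholders; in practice they are estimated,
    often by regressing log(T_ij) on the log-transformed predictors.
    """
    b1, b2, b3 = beta
    return g * gdp_i ** b1 * gdp_j ** b2 / dist ** b3
```

Policy variables such as tariffs or import bans enter as additional multiplicative terms in the log-linear form, which is how the study captures flows shifting toward less policy-restricted countries.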

Keywords: environmental economics, machine learning, recycling, international trade

Procedia PDF Downloads 151
36393 Efficient Recommendation System for Frequent and High Utility Itemsets over Incremental Datasets

Authors: J. K. Kavitha, D. Manjula, U. Kanimozhi

Abstract:

Mining frequent and high utility itemsets has gained much significance in recent years. When data arrive sporadically, incremental and interactive rule mining and utility mining approaches can be adopted to handle users' dynamic environmental needs and avoid redundancies by reusing previous data structures and mining results. Dependence on recommendation systems has risen exponentially since the advent of search engines. This paper proposes a model for building a recommendation system that suggests frequent and high utility itemsets over dynamic datasets for a cluster-based location prediction strategy to predict users' trajectories, using the Efficient Incremental Rule Mining (EIRM) algorithm and the Fast Update Utility Pattern Tree (FUUP) algorithm. Comprehensive experimental evaluations show that this scheme delivers excellent performance.
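The notion of a frequent itemset underlying EIRM and FUUP can be illustrated with a naive counter, restricted to itemsets of size 1 and 2 for brevity; real miners prune candidates Apriori-style and update counts incrementally as new transactions arrive:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count every itemset of size 1 or 2 across the transactions and
    keep those occurring at least min_support times."""
    counts = {}
    for t in transactions:
        for size in (1, 2):
            for combo in combinations(sorted(set(t)), size):
                counts[combo] = counts.get(combo, 0) + 1
    return {c: n for c, n in counts.items() if n >= min_support}
```

High utility mining generalizes this by weighting items (e.g., by profit or relevance) instead of merely counting occurrences, which is where a utility pattern tree such as FUUP comes in.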

Keywords: data sets, recommendation system, utility item sets, frequent item sets mining

Procedia PDF Downloads 277
36392 The Environmental and Economic Analysis of Extended Input-Output Table for Thailand’s Biomass Pellet Industry

Authors: Prangvalai Buasan, Boonrod Sajjakulnukit, Thongchart Bowonthumrongchai

Abstract:

The demand for biomass pellets in the industrial sector has increased significantly since 2020. The revised version of Thailand's Power Development Plan, as well as the Alternative Energy Development Plan, aims to promote biomass fuel consumption of around 485 MW by 2030. The replacement of solid fossil fuel with biomass pellets will affect medium-term and long-term national benefits for all industries throughout the supply chain. Therefore, the environmental and economic impacts throughout the biomass pellet supply chain need to be evaluated to provide better insight into the goods and financial flows of this activity. This study extended the national input-output table to include the biomass pellet industry and applied the input-output analysis (IOA) method, a form of macroeconomic analysis, to interpret the transactions between industries in monetary units under the revised national power development plan. Greenhouse gas (GHG) emissions from energy and raw material consumption through the supply chain were also evaluated. The total intermediate transactions of all economic sectors including the biomass pellet sector (CASE 2) increased by 0.02% compared with the conservative case (CASE 1). The control total, i.e., the sum of total intermediate transactions and value added, increased by 0.07% in CASE 2 compared with CASE 1. The pellet production process emitted 432.26 MtCO2e per year, with the major share of GHG emissions coming from the plantation stage of the raw biomass.
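At the core of input-output analysis is the Leontief relation x = (I - A)^(-1) d, linking total sectoral output x to final demand d through the technical-coefficient matrix A. A two-sector sketch with made-up coefficients (the study's extended table has many more sectors, so the inverse would be computed numerically):

```python
def leontief_output(A, d):
    """Solve x = A x + d for a 2-sector economy, i.e. x = (I - A)^-1 d.

    A: 2x2 technical-coefficient matrix (input per unit of output).
    d: final demand vector of length 2.
    """
    (a11, a12), (a21, a22) = A
    b11, b12, b21, b22 = 1 - a11, -a12, -a21, 1 - a22   # I - A
    det = b11 * b22 - b12 * b21
    return [( b22 * d[0] - b12 * d[1]) / det,           # 2x2 inverse times d
            (-b21 * d[0] + b11 * d[1]) / det]
```

Scaling the resulting sectoral outputs by per-unit emission factors is the standard way an environmentally extended input-output table attributes GHG emissions through the supply chain.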

Keywords: input-output analysis, environmental extended input-output analysis, macroeconomic planning, biomass pellets, renewable energy

Procedia PDF Downloads 83
36391 Experiences of Timing Analysis of Parallel Embedded Software

Authors: Muhammad Waqar Aziz, Syed Abdul Baqi Shah

Abstract:

The execution time analysis is fundamental to the successful design and execution of real-time embedded software. In such analysis, the Worst-Case Execution Time (WCET) of a program is a key measure, on the basis of which system tasks are scheduled. The WCET analysis of embedded software is also needed for system understanding and to guarantee its behavior. WCET analysis can be performed statically (without executing the program) or dynamically (through measurement). Traditionally, research on the WCET analysis assumes sequential code running on single-core platforms. However, as computation is steadily moving towards using a combination of parallel programs and multi-core hardware, new challenges in WCET analysis need to be addressed. In this article, we report our experiences of performing the WCET analysis of Parallel Embedded Software (PES) running on multi-core platform. The primary purpose was to investigate how WCET estimates of PES can be computed statically, and how they can be derived dynamically. Our experiences, as reported in this article, include the challenges we faced, possible suggestions to these challenges and the workarounds that were developed. This article also provides observations on the benefits and drawbacks of deriving the WCET estimates using the said methods and provides useful recommendations for further research in this area.
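A measurement-based estimate, as discussed above, is simply the high-water mark over repeated runs. A sketch follows, with the caveat (central to the article's challenges) that a measured maximum can only under-approximate the true WCET unless the worst-case path and worst-case interference on the multi-core platform are actually provoked:

```python
import time

def measured_wcet(func, runs=1000):
    """Measurement-based execution-time high-water mark.

    Runs func repeatedly and returns the largest observed wall-clock
    duration in seconds. This lower-bounds the true WCET; static
    analysis is needed for a guaranteed upper bound.
    """
    worst = 0.0
    for _ in range(runs):
        t0 = time.perf_counter()
        func()
        worst = max(worst, time.perf_counter() - t0)
    return worst
```

For parallel embedded software, the measured spread between runs also hints at contention effects (shared caches, buses) that sequential WCET analysis does not model.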

Keywords: embedded software, worst-case execution-time analysis, static flow analysis, measurement-based analysis, parallel computing

Procedia PDF Downloads 306
36390 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products

Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic

Abstract:

The present study deals with chemometric regression analysis of the quality parameters and radical scavenging ability of kombucha-fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN), and wild thyme (WT) kombucha inoculums. Each analyzed sample was described by milk fat content (MF, %), total unsaturated fatty acids content (TUFA, %), monounsaturated fatty acids content (MUFA, %), polyunsaturated fatty acids content (PUFA, %), radical scavenging ability against DPPH and hydroxyl radicals (RSA_DPPH, % and RSA_OH, %), and pH values measured every hour from the start to the end of fermentation. The aim of the regression analysis was to establish chemometric models that can predict the radical scavenging ability (RSA_DPPH, % and RSA_OH, %) of the samples by correlating it with MF, TUFA, MUFA, and PUFA content and the pH value at the beginning, middle, and end of the fermentation process, which lasted between 11 and 17 hours, until a pH value of 4.5 was reached. The analysis was carried out by applying univariate linear regression (ULR) and multiple linear regression (MLR) methods to the raw data and to data standardized by min-max normalization. The obtained models were characterized by very limited predictive power (poor cross-validation parameters) and weak statistical characteristics. Based on the analysis, it can be concluded that the radical scavenging ability cannot be precisely predicted from MF, TUFA, MUFA, and PUFA content and pH values alone; other quality parameters should be considered and included in further modeling. This study is based upon work from the project "Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina," 142-451-2400/2019-03, supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina.
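The min-max normalization used to standardize the data before modeling can be sketched as:

```python
def min_max_normalize(values):
    """Scale a series linearly to [0, 1]: (x - min) / (max - min).

    Assumes the series is not constant (max > min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```

Standardizing each predictor this way puts MF, fatty-acid contents, and pH on a common scale so that no single variable dominates the MLR fit merely because of its units.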

Keywords: chemometrics, regression analysis, kombucha, quality control

Procedia PDF Downloads 122
36389 Using Machine Learning to Predict Answers to Big-Five Personality Questions

Authors: Aadityaa Singla

Abstract:

The big five personality traits are openness, conscientiousness, extraversion, agreeableness, and neuroticism. To gain insight into their personality, many people turn to these categories, each of which has a different meaning and set of characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this is still a rather novel development. It is possible for various AI classification models to accurately predict the answer to a personality question from ten input questions. This contrasts with the roughly one hundred questions that people normally have to answer to gain a complete picture of their five personality traits. To approach this problem, various AI classification models were applied to a dataset to predict what a user would answer, and each model's prediction was compared to the actual response. There are five answer choices (a 20% chance of a correct guess), and the models exceed that value to different degrees, proving their significance. An MLP classifier, a decision tree, a linear model, and k-nearest neighbors obtained test accuracies of 86.643%, 54.625%, 47.875%, and 52.125%, respectively. These results show that there is potential for more nuanced predictions to be made regarding personality in the future.
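Of the classifiers compared, k-nearest neighbors is the simplest to sketch. A minimal pure-Python version with hypothetical data follows (the study's models were presumably fitted with a standard ML library; here the point is only the majority-vote mechanism):

```python
def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.

    Returns the majority label among the k training points closest
    to the query under squared Euclidean distance.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    nearest = sorted(train, key=lambda fl: sq_dist(fl[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)
```

In the personality setting, the feature vector would hold the ten answered questions (numerically coded) and the label the predicted answer choice for the held-out question.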

Keywords: machine learning, personality, big five personality traits, cognitive science

Procedia PDF Downloads 125