Search results for: Content mining
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2183

1103 Reutilization of Organic and Peat Soils by Deep Cement Mixing

Authors: Bee-Lin Tang, Ismail Bakar, Chee - Ming Chan

Abstract:

Limited infrastructure development on peat and organic soils is a serious geotechnical issue common to many countries of the world, especially Malaysia, where about 1.5 million ha of such problematic soils are distributed. These soils have high water and organic contents, exhibit different mechanical properties, and may also change chemically and biologically with time. Constructing structures on peaty ground involves the risk of ground failure and extreme settlement. Nowadays, much effort is needed to make peatlands usable for construction due to increased land use. The deep mixing method, employing cement as a binder, is generally used as a measure against peaty/organic ground failure problems, and the technique is widely adopted because it can improve the ground considerably in a short period of time. An understanding of the geotechnical properties of these soils, such as shear strength, stiffness and compressibility behavior, is required before construction on them can proceed. Therefore, 1-1.5 m peat soil samples from the state of Johor and an organic soil from Melaka, Malaysia, were investigated. Cement was added to the soils in the pre-mixing stage, with water-cement ratios in the range of 3.5, 7, 14 and 140 for peats and 5, 10 and 30 for organic soils, essentially to modify the original soil textures and properties. The mixtures, in slurry form, were poured into polyvinyl chloride (PVC) tubes and cured at a room temperature of 25°C for 7, 14 and 28 days. Laboratory experiments, including unconfined compressive strength and bender element tests, were conducted to monitor the improved strength and stiffness of the 'stabilised mixed soils'. In addition, scanning electron microscopy (SEM) observations were made to investigate changes in the microstructures of the stabilised soils and to evaluate the hardening effect of cement-stabilised peat and organic soils. This preliminary effort indicated that pre-mixing peat and organic soils contributes to gaining soil strength while helping engineers establish a new method for improving such problematic ground in further practical and long-term applications.

Keywords: peat soils, organic soils, cement stabilisation, strength, stiffness.

1102 Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings

Authors: Leong Lee, Cyriac Kandoth, Jennifer L. Leopold, Ronald L. Frank

Abstract:

Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% that was achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
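
As a point of reference for the Q3 scores quoted above, Q3 is simply the percentage of residues whose predicted three-state label (helix H, strand E, coil C) matches the observed label. A minimal Python sketch of that calculation (not the authors' code):

```python
# Minimal sketch of the Q3 accuracy used to score three-state
# secondary-structure predictions (H = helix, E = strand, C = coil).
def q3_accuracy(predicted: str, observed: str) -> float:
    assert len(predicted) == len(observed)
    matches = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * matches / len(observed)

# Example: 6 of 8 residues correct -> Q3 = 75.0
print(q3_accuracy("HHHEECCC", "HHHEEHCH"))
```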

Keywords: data mining, protein secondary structure prediction, parallelization.

1101 Study of Syntactic Errors for Deep Parsing at Machine Translation

Authors: Yukiko Sasaki Alam, Shahid Alam

Abstract:

Syntactic parsing is vital for semantic treatment by many applications related to natural language processing (NLP), because form and content coincide in many cases. However, syntactic parsing has not yet reached a reliable level of performance. By manually examining and analyzing individual machine translation output errors that involve syntax as well as semantics, this study attempts to discover what is required for improving syntactic and semantic parsing.

Keywords: Machine translation, error analysis, syntactic errors, knowledge required for parsing.

1100 Designing a Framework for Network Security Protection

Authors: Eric P. Jiang

Abstract:

As the Internet continues to grow at a rapid pace as the primary medium for communications and commerce, and as telecommunication networks and systems continue to expand their global reach, digital information has become the most popular and important information resource, and our dependence upon the underlying cyber infrastructure has been increasing significantly. Unfortunately, as our dependency has grown, so has the threat to the cyber infrastructure from spammers, attackers and criminal enterprises. In this paper, we propose a new machine learning based network intrusion detection framework for cyber security. The detection process of the framework consists of two stages: model construction and intrusion detection. In the model construction stage, a semi-supervised machine learning algorithm is applied to a collected set of network audit data to generate a profile of normal network behavior. In the intrusion detection stage, input network events are analyzed and compared with the patterns gathered in the profile, and some of them are flagged as anomalies if these events are sufficiently far from the expected normal behavior. The proposed framework is particularly applicable to situations where only a small amount of labeled network training data is available, which is very typical in real world network environments.
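
A minimal sketch of the two-stage idea described above (profile construction, then distance-based flagging). The Gaussian profile, the Mahalanobis distance and the threshold are illustrative assumptions standing in for the paper's unspecified semi-supervised algorithm:

```python
import numpy as np

# Stage 1: build a profile of normal behavior from collected audit records,
# summarized here by a mean vector and covariance matrix of event features.
rng = np.random.default_rng(0)
normal_events = rng.normal(size=(500, 4))          # placeholder audit features
mu = normal_events.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_events, rowvar=False))

# Stage 2: flag incoming events whose distance from the profile exceeds a
# threshold (assumed value; in practice tuned on held-out data).
def is_anomaly(event: np.ndarray, threshold: float = 3.5) -> bool:
    d = event - mu
    return float(np.sqrt(d @ cov_inv @ d)) > threshold

print(is_anomaly(np.array([0.1, -0.2, 0.0, 0.3])))   # near the profile -> False
print(is_anomaly(np.array([8.0, 9.0, -7.0, 10.0])))  # far from the profile -> True
```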

Keywords: classification, data analysis and mining, network intrusion detection, semi-supervised learning.

1099 Production of Pre-Reduction of Iron Ore Nuggets with Lesser Sulphur Intake by Devolatisation of Boiler Grade Coal

Authors: Chanchal Biswas, Anrin Bhattacharyya, Gopes Chandra Das, Mahua Ghosh Chaudhuri, Rajib Dey

Abstract:

Boiler coals with low fixed carbon and higher ash content have always challenged metallurgists to develop a suitable method for their utilization. In the present study, an attempt is made to establish an energy-effective method for the reduction of iron ore fines in the form of nuggets by using syngas. By devolatisation (expulsion of volatile matter by applying heat) of boiler coal, a gaseous product enriched with reducing agents such as CO, CO2, H2 and CH4 is generated, and the iron ore nuggets are reduced by this syngas. For that reason, there is no direct contact between the iron ore nuggets and the coal ash, which helps to minimize the sulphur intake of the reduced nuggets. A laboratory-scale devolatisation furnace with a reduction facility was designed and evaluated after in-depth studies and exhaustive experimentation, including thermo-gravimetric (TG-DTA) analysis to find the volatile fraction present in boiler grade coal, gas chromatography (GC) to determine the syngas composition at different temperatures, and furnace temperature gradient measurements to minimize the furnace cost by applying one heating coil. The nuggets are reduced in the devolatisation furnace at three different temperatures and three different times. The pre-reduced nuggets are subjected to analytical weight-loss calculations to evaluate the extent of reduction. The phases and surface morphology of the pre-reduced samples are characterized using X-ray diffractometry (XRD), energy dispersive X-ray spectrometry (EDX), scanning electron microscopy (SEM), a carbon-sulphur analyzer and chemical analysis. The degree of metallization of the reduced nuggets is 78.9% using boiler grade coal. The pre-reduced nuggets with lower sulphur content could be used in the blast furnace as raw material or coolant, which would reduce the furnace's consumption of high-quality coke due to their pre-reduced character. They can also be used as a coolant in the Basic Oxygen Furnace (BOF).
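
For reference, the degree of metallization quoted above (78.9%) is conventionally computed from the chemical analysis as the share of total iron that has been reduced to the metallic state (a standard definition, not a formula given in the abstract):

\[ \text{Degree of metallization}\ (\%) = \frac{\mathrm{Fe_{metallic}}}{\mathrm{Fe_{total}}} \times 100 \]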

Keywords: Alternative ironmaking, coal devolatisation, extent of reduction, nugget making, syngas based DRI, solid state reduction.

1098 Moisture Diffusivity of AAC with Different Densities

Authors: Tomáš Korecký, Kamil Ďurana, Miroslava Lapková, Robert Černý

Abstract:

A method for determining the moisture diffusivity of two types of autoclaved aerated concrete with different bulk densities is presented in the paper. Only one-dimensional liquid-phase water transport was measured on the specimens. Each evaluation was made from moisture profiles measured at specific times with a capacitance moisture meter. All values from the capacitance meter were recalculated to moisture content by mass. The moisture diffusivity was determined in dependence on both moisture and temperature. The experimental temperatures were set at 55, 65, 75 and 85°C.
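
For context, the moisture diffusivity evaluated from such moisture profiles is the coefficient κ(u) of the nonlinear diffusion model commonly used for one-dimensional liquid moisture transport (a standard formulation, not quoted from the paper), where u is the moisture content by mass, t is time and x is the position along the specimen:

\[ \frac{\partial u}{\partial t} = \frac{\partial}{\partial x}\left( \kappa(u)\, \frac{\partial u}{\partial x} \right) \]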

Keywords: moisture diffusivity, autoclaved aerated concrete, capacitance moisture meter

1097 Optimal Conditions for Carotenoid Production and Antioxidation Characteristics by Rhodotorula rubra

Authors: N. Chanchay, S. Sirisansaneeyakul, C. Chaiyasut, N. Poosaran

Abstract:

This study aims to screen and optimize the major nutrients for maximum carotenoid production and antioxidation characteristics by Rhodotorula rubra. It was found that supplementing the medium with 10 g/l glucose as the carbon source, 1 g/l ammonium sulfate as the nitrogen source and 1 g/l yeast extract as the growth factor provided a better carotenoid yield of 30.39 μg/g cell dry weight. The antioxidation of Rhodotorula rubra measured by the DPPH, ABTS and MDA methods was 1.463%, 34.21% and 34.09 μmol/l, respectively.
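
As a reference for the DPPH and ABTS percentages above, radical-scavenging activity is usually reported as percent inhibition computed from the absorbance of the control and sample solutions (a standard formula, not stated in the abstract):

\[ \text{Inhibition}\ (\%) = \frac{A_{control} - A_{sample}}{A_{control}} \times 100 \]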

Keywords: Carotenoid, Rhodotorula rubra, Antioxidation, DPPH, ABTS

1096 Competitiveness of Animation Industry: The Case of Thailand

Authors: T. Niracharapa

Abstract:

The research studied and examined the competitiveness of the animation industry in Thailand. Data were collected from articles, related reports and websites, news, research, and interviews of key persons from both the public and private sectors. The diamond model was used for the analysis. The major factors driving the Thai animation industry forward include a quality workforce, its creativity and strong associations. However, discontinuity in government support, infrastructure, marketing, IP creation and financial constraints are factors keeping the Thai animation industry less competitive in the global market.

Keywords: Animation, competitiveness, digital content, Thailand.

1095 Mixtures of Monotone Networks for Prediction

Authors: Marina Velikova, Hennie Daniels, Ad Feelders

Abstract:

In many data mining applications, it is a priori known that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision-maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all. We propose a novel method to construct prediction models in which monotone dependencies with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions. By using simulation and real case studies, we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also reduces considerably the model variance in comparison to standard neural networks with weight decay.
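
One compact way to read the mixture idea sketched above (our paraphrase, not the authors' exact notation) is that the prediction convolutes K monotone networks f_k with nonnegative kernel weights w_k that sum to one at every input,

\[ \hat{y}(\mathbf{x}) = \sum_{k=1}^{K} w_k(\mathbf{x})\, f_k(\mathbf{x}), \qquad w_k(\mathbf{x}) \ge 0, \qquad \sum_{k=1}^{K} w_k(\mathbf{x}) = 1, \]

so that monotonicity of each f_k in the constrained inputs carries over to the mixture whenever the weights do not depend on those inputs (an assumption of this sketch).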

Keywords: mixture models, monotone neural networks, partially monotone models, partially monotone problems.

1094 A Methodology for Investigating Public Opinion Using Multilevel Text Analysis

Authors: William Xiu Shun Wong, Myungsu Lim, Yoonjin Hyun, Chen Liu, Seongi Choi, Dasom Kim, Kee-Young Kwahk, Namgyu Kim

Abstract:

Recently, many users have begun to frequently share their opinions on diverse issues using various social media. Therefore, numerous governments have attempted to establish or improve national policies according to the public opinions captured from various social media. In this paper, we indicate several limitations of the traditional approaches to analyzing public opinion on science and technology and provide an alternative methodology to overcome these limitations. First, we distinguish between the science and technology analysis phase and the social issue analysis phase to reflect the fact that public opinion can be formed only when a certain science and technology is applied to a specific social issue. Next, we successively apply a start list and a stop list to acquire clarified and interesting results. Finally, to identify the most appropriate documents that fit a given subject, we develop a new logical filter concept that consists not only of mere keywords but also of a logical relationship among the keywords. This study then analyzes the possibilities for the practical use of the proposed methodology through its application to discover core issues and public opinions from 1,700,886 documents comprising SNS posts, blogs, news, and discussions.
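
A minimal sketch of the start-list/stop-list step and the logical filter described above. The keyword lists and the Boolean condition are invented placeholders, not the study's actual lists:

```python
# Hypothetical start list (terms that must be kept) and stop list (noise
# terms to discard), followed by a logical filter combining keywords.
START_LIST = {"ai", "genome", "nuclear"}   # placeholder science & technology terms
STOP_LIST = {"rt", "http", "lol"}          # placeholder noise tokens

def clean(tokens):
    """Keep start-list terms, drop stop-list terms, pass everything else."""
    return [t for t in tokens if t in START_LIST or t not in STOP_LIST]

def logical_filter(tokens):
    """Example logical relationship: ("ai" AND "privacy") OR "surveillance"."""
    toks = set(tokens)
    return ("ai" in toks and "privacy" in toks) or "surveillance" in toks

doc = "rt new ai rules on privacy announced http".split()
print(clean(doc))           # stop-list tokens removed
print(logical_filter(doc))  # True: the document matches the AND clause
```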

Keywords: Big data, social network analysis, text mining, topic modeling.

1093 Is E-learning Based On Learning Theories? A Literature Review

Authors: Apostolia Pange, Jenny Pange

Abstract:

E-learning aims to build knowledge and skills in order to enhance the quality of learning. Research has shown that the majority of e-learning solutions lack a pedagogical background and present some serious deficiencies regarding teaching strategies and content delivery, time and pace management, interface design and preservation of learners' focus. The aim of this review is to approach the design of e-learning solutions from a pedagogical perspective and to present some good practices of e-learning design grounded on the core principles of Learning Theories (LTs).

Keywords: design principles, e-learning, Learning Theories

1092 Catalytic Gasification of Olive Mill Wastewater as a Biomass Source under Supercritical Conditions

Authors: Ekin Kıpçak, Mesut Akgün

Abstract:

Recently, a growing interest has emerged in the development of new and efficient energy sources, due to the inevitable depletion of nonrenewable energy reserves. One of these alternative sources, which has great potential and sustainability to meet the energy demand, is biomass energy. This significant energy source can be utilized with various energy conversion technologies, one of which is biomass gasification in supercritical water.

Water, being the most important solvent in nature, has very important characteristics as a reaction solvent under supercritical conditions. At temperatures above its critical point (374.8 °C and 22.1 MPa), water becomes more acidic and its diffusivity increases. Working with water at high temperatures increases the thermal reaction rate, which in consequence leads to better dissolution of the organic matter and a fast reaction with oxygen. Hence, supercritical water offers a control mechanism depending on solubility, excellent transport properties based on its high diffusion ability, and new reaction possibilities for hydrolysis or oxidation.

In this study the gasification of a real biomass, namely olive mill wastewater (OMW), under supercritical water conditions is investigated with the use of a Ru/Al2O3 catalyst. OMW is a by-product obtained during olive oil production, which has a complex nature characterized by a high content of organic compounds and polyphenols. These properties give OMW a significant pollution potential, but at the same time, the high content of organics makes OMW a desirable biomass candidate for energy production.

The catalytic gasification experiments were carried out at five different reaction temperatures (400, 450, 500, 550 and 600 °C) and five reaction times (30, 60, 90, 120 and 150 s), under a constant pressure of 25 MPa. Through these experiments, the effects of reaction temperature and time on the gasification yield, gaseous product composition and OMW treatment efficiency were investigated.

Keywords: Catalyst, Gasification, Olive mill wastewater, Ru/Al2O3, Supercritical water.

1091 M2LGP: Mining Multiple Level Gradual Patterns

Authors: Yogi Satrya Aryadinata, Anne Laurent, Michel Sala

Abstract:

Gradual patterns have been studied for many years as they contain precious information. They have been integrated in many expert systems and rule-based systems, for instance to reason on knowledge such as “the greater the number of turns, the greater the number of car crashes”. In many cases, this knowledge has been considered as a rule “the greater the number of turns → the greater the number of car crashes”. Historically, works have thus been focused on the representation of such rules, studying how implication could be defined, especially fuzzy implication. These rules were defined by experts who were in charge of describing the systems they were working on in order to make them operate automatically. More recently, approaches have been proposed in order to mine databases for automatically discovering such knowledge. Several approaches have been studied, the main scientific topics being: how to determine what a relevant gradual pattern is, and how to discover such patterns as efficiently as possible (in terms of both memory and CPU usage). However, in some cases, end-users are not interested in raw, low-level knowledge, and are rather interested in trends. Moreover, it may be the case that no relevant pattern can be discovered at a low level of granularity (e.g. city), whereas some can be discovered at a higher level (e.g. county). In this paper, we thus extend gradual pattern approaches in order to consider multiple-level gradual patterns. For this purpose, we consider two aggregation policies, namely horizontal and vertical.
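
As an illustration of how a gradual pattern such as “the greater the number of turns, the greater the number of car crashes” can be scored on a table of records, one common measure is the fraction of concordant object pairs; a minimal sketch with made-up data (the paper's exact support measure and its multiple-level aggregation policies are not reproduced here):

```python
from itertools import combinations

# Records: (number_of_turns, number_of_car_crashes), illustrative values only.
records = [(1, 2), (3, 5), (4, 4), (6, 9), (8, 12)]

def gradual_support(rows, a=0, b=1):
    """Fraction of object pairs ordered the same way on attributes a and b."""
    pairs = list(combinations(rows, 2))
    concordant = sum((x[a] - y[a]) * (x[b] - y[b]) > 0 for x, y in pairs)
    return concordant / len(pairs)

print(gradual_support(records))  # 0.9: 9 of 10 pairs respect the pattern
```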

Keywords: Gradual Pattern.

1090 Education of Purchasing Professionals in Austria: Competence Based View

Authors: Volker Koch

Abstract:

This paper deals with the education of purchasing professionals in Austria. For this education, equivalent and measurable criteria are collected in order to create a comparison, and this comparison reveals the problem. To make the aforementioned comparison possible, methodologies such as the KODE Competence Atlas or presentations in matrix form are used. The result shows the content taught and whether there are any similarities or interesting differences among the current Austrian purchasers' education programs. The learning competencies of purchasing professionals are also illustrated in the study results.

Keywords: Competencies, education, purchasing professional, technological-oriented.

1089 The Robust Clustering with Reduction Dimension

Authors: Dyah E. Herwindiati

Abstract:

Clustering is the process of identifying homogeneous groups of objects called clusters, and it is an interesting topic in data mining. Objects in a group or class behave similarly. This paper discusses a robust clustering process for image data with two dimension-reduction approaches, i.e. two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to overcoming high dimensionality is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information; one of the most common forms of dimensionality reduction is principal component analysis (PCA). 2DPCA, often called a variant of PCA, treats the image matrices directly as 2D matrices; they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The decomposed classical covariance matrix is very sensitive to outlying observations. The objective of the paper is to compare the performance of robust minimizing vector variance (MVV) in the two-dimensional projection (2DPCA) and in PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulation aspects of robustness and an illustration of image clustering are discussed at the end of the paper.
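
For readers unfamiliar with 2DPCA, the image covariance matrix mentioned above is built directly from the M image matrices A_i (each m×n) without vectorizing them (the standard 2DPCA definition, not specific to this paper); the projection axes are then the leading eigenvectors of the n×n matrix G:

\[ \mathbf{G} = \frac{1}{M} \sum_{i=1}^{M} \left( \mathbf{A}_i - \bar{\mathbf{A}} \right)^{\top} \left( \mathbf{A}_i - \bar{\mathbf{A}} \right), \qquad \bar{\mathbf{A}} = \frac{1}{M} \sum_{i=1}^{M} \mathbf{A}_i . \]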

Keywords: Breakdown point, Consistency, 2DPCA, PCA, Outlier, Vector Variance

1088 Sliding Joints and Soil-Structure Interaction

Authors: Radim Cajka, Pavlina Mateckova, Martina Janulikova, Marie Stara

Abstract:

The use of a sliding joint is an effective method to decrease the stress in a foundation structure where there is horizontal deformation of the subsoil (areas afflicted by underground mining) or horizontal deformation of the foundation structure itself (pre-stressed foundations, creep, shrinkage, temperature deformation). A convenient material for a sliding joint is a bitumen asphalt belt. Experiments for different types of bitumen belts were undertaken at the Faculty of Civil Engineering, VSB Technical University of Ostrava, in 2008. This year an extension of the 2008 experiments is in progress, and the shear resistance of the sliding joint is being tested as a function of temperature in a temperature-controlled room. In this paper, experimental results of temperature-dependent shear resistance are presented. The result of the experiments should be the sliding-joint shear resistance as a function of deformation velocity and temperature. This relationship is used for numerical analysis of the stress/strain relation between the foundation structure and the subsoil. Using a rheological sliding joint could lead to a decrease in the amount of reinforcement, contribute to higher reliability of the foundation structure, and thus enable the design of more durable and sustainable building structures.

Keywords: Pre-stressed foundations, sliding joint, soil-structure interaction, subsoil horizontal deformation.

1087 Evaluation of the Urban Regeneration Project: Land Use Transformation and SNS Big Data Analysis

Authors: Ju-Young Kim, Tae-Heon Moon, Jung-Hun Cho

Abstract:

Urban regeneration projects have been actively promoted in Korea. In particular, Jeonju Hanok Village is evaluated as one of the representative cases in terms of utilizing local cultural heritage sites in an urban regeneration project. However, recently, there has been a growing concern in this area due to ‘gentrification’, caused by excessive commercialization and surging numbers of tourists. This trend has changed land and building use and resulted in the loss of the identity of the region. In this regard, this study analyzed the land use transformation between 2010 and 2016 to identify the commercialization trend in Jeonju Hanok Village. In addition, it conducted an SNS big data analysis on Jeonju Hanok Village from February 14th, 2016 to March 31st, 2016 to identify visitors’ awareness of the village. The study results demonstrate that rapid commercialization was underway, unlike the initial intention, so planners and officials in the city government should reconsider the project direction and rebuild deliberate management strategies. This study is meaningful in that it analyzed the land use transformation and SNS big data to identify the current situation in an urban regeneration area. Furthermore, it is expected that the study results will contribute to the vitalization of the regeneration area.

Keywords: Land use, SNS, text mining, urban regeneration.

1086 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification

Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh

Abstract:

Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) is one of the important big genomic and non-coding datasets presenting the genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing the miRNA dataset has been a challenging problem for researchers. The number of features is high relative to the number of samples, and the data suffer from being imbalanced. A feature selection method has been used to select features with more ability to distinguish classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier for the classification of cancer types is utilized, which employs a Genetic Algorithm to find optimized hyper-parameters of the CNN. In order to make the classification process by the CNN faster, a Graphics Processing Unit (GPU) is recommended for calculating the mathematical equations in a parallel way. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
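
A minimal sketch of a genetic-algorithm loop for tuning classifier hyper-parameters, with a toy fitness function standing in for "train the CNN and return its validation accuracy". The search space, rates and fitness here are illustrative assumptions, not the authors' configuration:

```python
import random

random.seed(0)
SEARCH = {"lr": [1e-4, 1e-3, 1e-2], "filters": [16, 32, 64], "kernel": [3, 5, 7]}

def fitness(ind):
    # Placeholder: in the real pipeline this would train a CNN with these
    # hyper-parameters on the miRNA data and return validation accuracy.
    return -abs(ind["lr"] - 1e-3) * 100 + ind["filters"] / 64 + ind["kernel"] / 7

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH.items()}

def mutate(ind):
    k = random.choice(list(SEARCH))
    return {**ind, k: random.choice(SEARCH[k])}

population = [random_individual() for _ in range(8)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                        # selection: keep the fittest half
    children = [mutate(random.choice(parents)) for _ in range(4)]
    population = parents + children                 # next generation

population.sort(key=fitness, reverse=True)
print(population[0])  # best hyper-parameter set found
```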

Keywords: Cancer classification, feature selection, deep learning, genetic algorithm.

1085 Mercury Content in Edible Part of Otolithes Ruber Marketed in Hamedan, Iran

Authors: L. Tayebi, S. Sobhanardakani, A. Farmany, M. Cheraghi

Abstract:

In this research, the level of mercury in the muscle tissue of Otolithes ruber retailed in Hamedan, Iran, was determined by flame atomic absorption spectrometry after wet digestion; the analysis of mercury was carried out spectrophotometrically. The average concentration of Hg in the muscle tissue of Otolithes ruber was 0.030±0.026 µg/g, which is lower than the Maximum Allowable Concentration determined by the FAO/WHO Codex Alimentarius Commission.

Keywords: mercury, Otolithes ruber, edible part, Hamedan

1084 CompPSA: A Component-Based Pairwise RNA Secondary Structure Alignment Algorithm

Authors: Ghada Badr, Arwa Alturki

Abstract:

The biological function of an RNA molecule depends on its structure. The objective of the alignment is finding the homology between two or more RNA secondary structures. Knowing the common functionalities between two RNA structures allows a better understanding and a discovery of other relationships between them. Besides, identifying non-coding RNAs, that is, RNAs not translated into a protein, is a popular application in which RNA structural alignment is the first step. A few methods for RNA structure-to-structure alignment have been developed. Most of these methods are partial structure-to-structure, sequence-to-structure, or structure-to-sequence alignment. Less attention is given in the literature to the use of efficient RNA structure representations, and structure-to-structure alignment methods are lacking. In this paper, we introduce an O(N²) Component-based Pairwise RNA Structure Alignment (CompPSA) algorithm, where structures are given in a component-based representation and where N is the maximum number of components in the two structures. The proposed algorithm compares the two RNA secondary structures based on their weighted component features rather than on their base-pair details. Extensive experiments are conducted illustrating the efficiency of the CompPSA algorithm when compared to other approaches on different real and simulated datasets. The CompPSA algorithm shows an accurate similarity measure between components. The algorithm gives the user the flexibility to align the two RNA structures based on their weighted features (position, full length, and/or stem length). Moreover, the algorithm proves scalability and efficiency in time and memory performance.
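
A minimal sketch of the component-based comparison idea: each structure is reduced to a list of components described by weighted features, and all component pairs across the two structures are scored, giving the O(N²) behavior. The feature tuple (position, full length, stem length) and the user weights follow the abstract, but the data values and the scoring function itself are illustrative assumptions:

```python
# Each RNA secondary structure is a list of components described by
# (position, full_length, stem_length); the values below are made up.
structure_a = [(5, 12, 4), (30, 20, 6), (60, 15, 5)]
structure_b = [(6, 11, 4), (32, 18, 7), (58, 15, 5)]

WEIGHTS = (0.2, 0.4, 0.4)  # user-chosen weights for position, full length, stem length

def component_score(c1, c2):
    """Similarity in [0, 1]: weighted relative agreement of the three features."""
    return sum(w * (1 - abs(a - b) / max(a, b)) for w, a, b in zip(WEIGHTS, c1, c2))

# O(N^2): score every pair of components across the two structures.
scores = [[component_score(a, b) for b in structure_b] for a in structure_a]
best_matches = [max(row) for row in scores]
print(sum(best_matches) / len(best_matches))  # overall similarity of the two structures
```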

Keywords: Alignment, RNA secondary structure, pairwise, component-based, data mining.

1083 Effects of Livestream Affordances on Consumer Purchase Willingness: Explicit IT Affordances Perspective

Authors: Isaac O. Asante, Yushi Jiang, Hailin Tao

Abstract:

Livestreaming marketing, the new electronic commerce element, has become an optional marketing channel following the COVID-19 pandemic, and many sellers are leveraging the features presented by livestreaming to increase sales. This study was conducted to measure real-time observable interactions between consumers and sellers. Based on affordance theory, this study conceptualized constructs representing the interactive features and examined how they drive consumers’ purchase willingness during livestreaming sessions, using 1,238 datasets from Amazon Live obtained through the manual observation of transaction records. Using structural equation modeling, the ordinary least squares regression suggests that live viewers, new followers, live chats, and likes positively affect purchase willingness. The Sobel and Monte Carlo tests show that new followers, live chats, and likes significantly mediate the relationship between live viewers and purchase willingness. The study presents a way of measuring interactions in livestreaming commerce and proposes a way to manually gather data on consumer behavior on livestreaming platforms when the application programming interface (API) of such platforms does not support data mining algorithms.
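
For reference, the Sobel test cited above evaluates whether the indirect (mediated) effect a·b differs from zero, where a is the estimated path from live viewers to the mediator, b the path from the mediator to purchase willingness, and s_a, s_b their standard errors (the standard form of the test, not reproduced from the paper); |z| > 1.96 indicates significant mediation at the 5% level:

\[ z = \frac{a\,b}{\sqrt{b^{2} s_{a}^{2} + a^{2} s_{b}^{2}}} \]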

Keywords: Livestreaming marketing, live chats, live viewers, likes, new followers, purchase willingness.

1082 Use of Semantic Networks as Learning Material and Evaluation of the Approach by Students

Authors: Philippe A. Martin

Abstract:

This article first summarizes reasons why current approaches supporting Open Learning and Distance Education need to be complemented by tools permitting lecturers, researchers and students to cooperatively organize the semantic content of learning-related materials (courses, discussions, etc.) into a fine-grained shared semantic network. This first part of the article also briefly describes the approach adopted to permit such collaborative work. Then, examples of such semantic networks are presented. Finally, an evaluation of the approach by students is provided and analyzed.

Keywords: knowledge sharing, knowledge evaluation, e-learning

1081 Effect of Biostimulants to Control the Phelipanche ramosa L. Pomel in Processing Tomato Crop

Authors: G. Disciglio, G. Gatta, F. Lops, A. Libutti, A. Tarantino, E. Tarantino

Abstract:

The experimental trial was carried out in an open field in the Foggia district (Apulia Region, Southern Italy) during the spring-summer season of 2014, in order to evaluate the effect of four biostimulant products (Radicon®, Viormon plus®, Lysodin® and Siapton® 10L), compared with a control (no biostimulant), on the infestation of a processing tomato crop (cv Dres) by the chlorophyll-lacking root parasite Phelipanche ramosa. Biostimulants consist of different categories of products (microbial inoculants, humic and fulvic acids, hydrolyzed proteins and amino acids, seaweed extracts) which play various roles in plant growth, including the improvement of crop resistance and of the quali-quantitative characteristics of yield. The experimental trial was arranged according to a complete randomized block design with five treatments, each one replicated three times. The processing tomato seedlings were transplanted on 5 May 2014. Throughout the crop cycle, P. ramosa infestation was assessed according to the number of emerged shoots (branched plants) counted in each plot at 66, 78 and 92 days after transplanting. The tomato fruits were harvested at the full stage of maturity on 8 August 2014. From each plot, the marketable yield was measured and the quali-quantitative yield parameters (mean weight, dry matter content, colour coordinates, colour index and soluble solids content of the fruits) were determined. The whole dataset was tested according to the basic assumptions for the analysis of variance (ANOVA), and the differences between the means were determined using Tukey’s tests at the 5% probability level. The results of the study showed that none of the applied biostimulants provided complete control of Phelipanche, although some positive effects were obtained from their application. In this respect, Radicon® appeared to be the most effective in reducing the infestation of this root parasite in the tomato crop. This treatment also gave the highest tomato yield.

Keywords: Biostimulants, control methods, Phelipanche ramosa, processing tomato crop.

1080 Proposal of Solidification/Stabilisation Process of Chosen Hazardous Waste by Cementation

Authors: Bozena Dohnalkova

Abstract:

This paper presents part of a project dedicated to the identification of the hazardous wastes with the most critical production within the Czech Republic, with the aim of studying and finding the optimal composition of the cement matrix that will ensure maximum disposal of the chosen hazardous waste content. In the first stage of the project, which this paper represents, a specific hazardous waste was chosen, its properties were identified and suitable solidification agents were selected. Consequently, solidification formulas and a testing methodology were proposed.

Keywords: Cementation, solidification, waste, binder.

1078 Development of Mechanical Properties of Self Compacting Concrete Containing Rice Husk Ash

Authors: M. A. Ahmadi, O. Alidoust, I. Sadrinejad, M. Nayeri

Abstract:

Self-compacting concrete (SCC), a new kind of high performance concrete (HPC), was first developed in Japan in 1986. The development of SCC has made the casting of densely reinforced and mass concrete convenient and has minimized noise. Fresh self-compacting concrete (SCC) flows into formwork and around obstructions under its own weight to fill it completely and self-compact (without any need for vibration), without any segregation or blocking. The elimination of the need for compaction leads to better quality concrete and a substantial improvement of working conditions. SCC mixes generally have a much higher content of fine fillers, including cement, and produce excessively high compressive strength concrete, which restricts their field of application to special concrete only. Using SCC mixes in general concrete construction practice requires low cost materials to make inexpensive concrete. Rice husk ash (RHA) has been used as a highly reactive pozzolanic material to improve the microstructure of the interfacial transition zone (ITZ) between the cement paste and the aggregate in self compacting concrete. Mechanical experiments on RHA-blended Portland cement concretes revealed that in addition to the pozzolanic reactivity of RHA (chemical aspect), the particle grading (physical aspect) of cement and RHA mixtures also exerted significant influences on the blending efficiency. The scope of this research was to determine the usefulness of rice husk ash (RHA) in the development of economical self compacting concrete (SCC). The cost of materials can be decreased by reducing the cement content through the use of a waste material such as rice husk ash as a partial cement replacement. This paper presents a study on the development of mechanical properties up to 180 days of self compacting and ordinary concretes with rice husk ash (RHA) from a rice paddy milling industry in Rasht (Iran). Two different replacement percentages of cement by RHA, 10% and 20%, and two different water/cementitious material ratios (0.40 and 0.35) were used for both self compacting and normal concrete specimens. The results are compared with those of the self compacting concrete without RHA in terms of compressive strength, flexural strength and modulus of elasticity. It is concluded that RHA has a positive effect on the mechanical properties at ages after 60 days. Based on the results, the self compacting concrete specimens have higher values than the normal concrete specimens in all tests except the modulus of elasticity. Also, specimens with 20% replacement of cement by RHA show the best performance.

Keywords: Self compacting concrete (SCC), Rice husk ash(RHA), Mechanical properties.

1078 Development of Prediction Models of Day-Ahead Hourly Building Electricity Consumption and Peak Power Demand Using the Machine Learning Method

Authors: Dalin Si, Azizan Aziz, Bertrand Lasternas

Abstract:

To encourage building owners to purchase electricity on the wholesale market and reduce building peak demand, this study aims to develop models that predict day-ahead hourly electricity consumption and demand using an artificial neural network (ANN) and a support vector machine (SVM). All prediction models are built in Python, with the tools scikit-learn and PyBrain. The input data for both consumption and demand prediction are the time stamp, outdoor dry bulb temperature, relative humidity, air handling unit (AHU) supply air temperature, and solar radiation. Solar radiation, which is unavailable a day ahead, is predicted first, and this estimate is then used as an input to predict consumption and demand. Models to predict consumption and demand are trained with both SVM and ANN, and depend on cooling or heating, weekdays or weekends. The results show that the ANN is the better option for both consumption and demand prediction. It can achieve 15.50% to 20.03% coefficient of variation of the root mean square error (CVRMSE) for consumption prediction and 22.89% to 32.42% CVRMSE for demand prediction, respectively. To conclude, the presented models have the potential to help building owners purchase electricity on the wholesale market, but they are not robust when used in demand response control.
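
For reference, the CVRMSE figures quoted above normalize the root-mean-square prediction error by the mean of the measured values (the definition commonly used for building energy models; the exact variant used by the authors, e.g. n versus n minus the number of model parameters in the denominator, is not stated):

\[ \mathrm{CVRMSE} = \frac{100}{\bar{y}} \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^{2}} \ \% \]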

Keywords: Building energy prediction, data mining, demand response, electricity market.

1077 Impact of Positive Psychology Education and Interventions on Well-Being: A Study of Students Engaged in Pastoral Care

Authors: Inna R. Edara, Haw-Lin Wu

Abstract:

Positive psychology investigates human strengths and virtues and promotes well-being. Relying on this assumption, positive interventions have been continuously designed to build pleasure and happiness, joy and contentment, engagement and meaning, hope and optimism, satisfaction and gratitude, spirituality, and various other positive measures of well-being. In line with this model of positive psychology and interventions, this study investigated certain measures of well-being in a group of 45 students enrolled in an 18-week positive psychology course and simultaneously engaged in service-oriented interventions that they chose for themselves based on the course content and individual interests. Students’ well-being was measured at the beginning and end of the course. The well-being indicators included positive automatic thoughts, optimism and hope, satisfaction with life, and spirituality. A paired-samples t-test conducted to evaluate the impact of class content and service-oriented interventions on students’ scores of well-being indicators showed a statistically significant increase from pre-class to post-class scores. There were also significant gender differences in post-course well-being scores, with females having higher levels of well-being than males. A two-way between-groups analysis of variance indicated a significant interaction effect of age by gender on the post-course well-being scores, with females in the age group of 56-65 having the highest scores of well-being in comparison to the males in the same age group. Regression analyses indicated that positive automatic thought significantly predicted hope and satisfaction with life in the pre-course analysis. In the post-course regression analysis, spiritual transcendence made a significant contribution to optimism, and positive automatic thought made a significant contribution to both hope and satisfaction with life. Finally, a significance test between pre-course and post-course regression coefficients indicated that the regression coefficients at pre-course were significantly different from the post-course coefficients, suggesting that the positive psychology course and the interventions were helpful in raising the levels of well-being. The overall results suggest a substantial increase in the participants’ well-being scores after engaging in the positive-oriented interventions, implying a need for designing more positive interventions in education to promote well-being.

Keywords: Hope, optimism, positive automatic thoughts, satisfaction with life, spirituality, well-being.

1076 The Impact of Protein Content on Athletes’ Body Composition

Authors: G. Vici, L. Cesanelli, L. Belli, R. Ceci, V. Polzonetti

Abstract:

Several factors contribute to success in sport, and diet is one of them. Evidence-based sport nutrition guidelines underline the importance of macro- and micro-nutrient balance and timing in order to improve an athlete's physical status and performance. Nevertheless, a high protein content is commonly found in resistance-training athletes' diets, with a carbohydrate intake that is insufficient or not well planned. The aim of the study was to evaluate the impact of different protein and carbohydrate diet contents on body composition and sport performance in a group of resistance-training athletes. Subjects were divided into a study group (n=16) and a control group (n=14). For a period of 4 months, both groups were subjected to the same resistance-training fitness program, with the study group following a specific diet and the control group following an ad libitum diet. Body composition was evaluated through anthropometric measurements (weight, height, body circumferences and skinfolds) and bioimpedance analysis. Physical strength and training status of individuals were evaluated through the one-repetition maximum test (RM1). Protein intake in the study group was found to be lower than in the control group. There was a statistically significant increase in body weight, fat-free mass and body cell mass in the study group with respect to the control group. Fat mass remained almost constant. Statistically significant changes were observed in quadriceps and biceps circumferences, with an increase in the study group. The RM1 test showed an improvement in the study group's strength but no changes in the control group. Usually, people consume a hyper-proteic diet to achieve muscle mass development. Through this study, it was possible to show that a protein intake fixed at 1.7 g/kg/d can meet the individual's needs. In parallel, the increased intake of carbohydrates, focusing on the quality and timing of intake, enabled the attainment of the desired results with a training protocol supporting a hypertrophic strategy. Therefore, the key point seems to be related to the planning of a structured program from both a nutritional and a training point of view.

Keywords: Body composition, diet, exercise, protein.

1075 Rethinking the Languages for Specific Purposes Syllabus in the 21st Century: Topic-Centered or Skills-Centered

Authors: A. Knezović

Abstract:

The 21st century has transformed the labor market landscape by posing new and different demands on university graduates as well as university lecturers, which means that the knowledge and academic skills students acquire in the course of their studies should be applicable and transferable from the higher education context to their future professional careers. Given the context of the Languages for Specific Purposes (LSP) classroom, the teachers’ objective is not only to teach the language itself, but also to prepare students to use that language as a medium to develop generic skills and competences. These include media and information literacy, critical and creative thinking, problem-solving and analytical skills, effective written and oral communication, as well as collaborative work and social skills, all of which are necessary to make university graduates more competitive in everyday professional environments. On the other hand, due to limitations of time and large numbers of students in classes, the frequently topic-centered syllabus of LSP courses places considerable focus on acquiring the subject matter and specialist vocabulary instead of sufficient development of the skills and competences required by students’ prospective employers. This paper intends to explore some of those issues as viewed both by LSP lecturers and by business professionals in their respective surveys. The surveys were conducted among more than 50 LSP lecturers at higher education institutions in Croatia, more than 40 HR professionals and more than 60 university graduates with degrees in economics and/or business working in management positions in mainly large and medium-sized companies in Croatia. Various elements of LSP course content have been taken into consideration in this research, including reading and listening comprehension of specialist texts, acquisition of specialist vocabulary and grammatical structures, as well as presentation and negotiation skills. The ability to hold meetings, conduct business correspondence, write reports, academic texts and case studies, and take part in debates was also taken into consideration, as well as informal business communication, business etiquette and core courses delivered in a foreign language. The results of the surveys conducted among LSP lecturers will be analyzed with reference to the extent to which those elements are included in their courses and how consistently and thoroughly they are evaluated according to their course requirements. Their opinions will be compared to the results of the surveys conducted among professionals from a range of industries in Croatia so as to examine how useful and important they perceive the same elements of LSP course content to be in their working environments. Such a comparative analysis will thus show to what extent the syllabi of LSP courses meet the demands of the employment market when it comes to the students’ language skills and competences, as well as transferable skills. Finally, the findings will also be compared to the observations based on practical teaching experience and the relevant sources that have been used in this research. In conclusion, the ideas and observations in this paper are merely open-ended questions that do not have conclusive answers, but might prompt LSP lecturers to re-evaluate the content and objectives of their course syllabi.

Keywords: Languages for specific purposes (LSP), language skills, topic-centered syllabus, transferable skills.

1074 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of video content available has also significantly increased. Currently, the number of smartphone users is at its peak, and many are increasingly using their smartphones as their main photography and recording devices. There have been a lot of developments in the field of video quality assessment in recent years, and more research on various other aspects of video and image is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face a lot of distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective Video Quality Assessment (VQA) metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken with the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. To the videos taken on each of these devices, the six most common types of distortions that users face have been applied, in addition to the already existing H.264 compression, based on four reference videos. Each of these six applied distortions has three levels of degradation. A total of five of the most popular VQA metrics have been evaluated on this dataset, and the highest and lowest values of each of the metrics on the distortions have been recorded. Finally, it is found that blur is the artifact on which most of the metrics did not perform well. Thus, in order to understand the results better, the amount of blur in the dataset has been calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results have shown that as the resolution increases, the performance of the metrics tends to become more accurate, and the best performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; however, when the compression applied is HEVC, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) perform significantly better.
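
For reference, the SSIM metric mentioned above compares two frames x and y through their local means, variances and covariance, and is typically averaged over all frames of a video (the standard single-scale definition; c1 and c2 are small constants that stabilize the division):

\[ \mathrm{SSIM}(x, y) = \frac{\left( 2\mu_x \mu_y + c_1 \right)\left( 2\sigma_{xy} + c_2 \right)}{\left( \mu_x^{2} + \mu_y^{2} + c_1 \right)\left( \sigma_x^{2} + \sigma_y^{2} + c_2 \right)} \]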

Keywords: Distortion, metrics, recording, frame rate, video quality assessment.
