Search results for: cluster model approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27466

27226 Effects of a Cluster Grouping of Gifted and Twice Exceptional Students on Academic Motivation, Socio-emotional Adjustment, and Life Satisfaction

Authors: Line Massé, Claire Baudry, Claudia Verret, Marie-France Nadeau, Anne Brault-Labbé

Abstract:

Little research has been conducted on educational services adapted for twice-exceptional students. As part of an action research project, a cluster grouping was set up in an elementary school in Quebec, bringing together gifted or twice-exceptional (2E) students (n = 11) and students not identified as gifted (n = 8) within a multilevel class (3rd and 4th years). 2E students had either attention deficit hyperactivity disorder (n = 8, including 3 with a specific learning disability) or autism spectrum disorder (n = 2). Differentiated instruction strategies were implemented, including the possibility for students to progress at their own pace, independent study or research projects, flexible accommodations, tutoring with older students, and the development of socio-emotional learning. A specialized educator also supported the teacher in the class for behavioural and socio-affective aspects. Objectives: The study aimed to assess the impacts of the grouping on all students, their academic motivation, and their socio-emotional adjustment. Method: A mixed method was used, combining a qualitative approach with a quantitative approach. Semi-structured interviews were conducted with students (N = 18, 4 girls and 14 boys aged 8 to 9) and one of their parents (N = 18) at the end of the school year. Parents and students completed two questionnaires at the beginning and end of the school year: the Behavior Assessment System for Children-3, child or parent versions (BASC-3, Reynolds and Kamphaus, 2015), and the Academic Motivation Scale (Vallerand et al., 1993). Parents also completed the Multidimensional Student Life Satisfaction Scale (Huebner, 1994, adapted by Fenouillet et al., 2014) comprising three domains (school, friendships, and motivation). Mixed thematic analyses were carried out on the interview data using the NVivo software. Related-samples Wilcoxon signed-rank tests were conducted on the questionnaire data. Results: Several themes emerged from the students' comments, including a positive impact on school motivation or attitude toward school, improved school results, reduced behavioural difficulties, and improved social relations. These remarks were more frequent among 2E students. Most 2E students also noted an improvement in their academic performance. Most parents reported improvements in attitudes toward school and reductions in disruptive behaviours in the classroom. Some parents also observed changes in behaviours at home or in the socio-emotional well-being of their children, again particularly parents of 2E children. Analysis of the questionnaires revealed significant differences at the end of the school year, more specifically pertaining to identified extrinsic motivation, conduct problems, attention, emotional self-control, executive functioning, negative emotions, functional deficiencies, and satisfaction with friendships. These results indicate that this approach could benefit not only gifted and twice-exceptional students but also students not identified as gifted.
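
As a point of reference for the analysis described above, a related-samples Wilcoxon signed-rank comparison of pre- and post-intervention scores can be run in a few lines; the sketch below uses SciPy, and the scores shown are illustrative placeholders, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical pre/post questionnaire scores for the same students
# (the study's actual BASC-3 and motivation data are not reproduced here).
pre  = [3.1, 2.8, 3.5, 2.2, 3.0, 2.7, 3.3, 2.9]
post = [3.6, 3.0, 3.9, 2.6, 3.4, 2.9, 3.8, 3.1]

# Related-samples Wilcoxon signed-rank test on the paired differences
stat, p_value = wilcoxon(pre, post)
print(f"W = {stat:.1f}, p = {p_value:.4f}")  # p < 0.05 suggests a significant change
```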

Keywords: Cluster grouping, elementary school, giftedness, mixed methods, twice exceptional students

Procedia PDF Downloads 74
27225 Identification of Dynamic Friction Model for High-Precision Motion Control

Authors: Martin Goubej, Tomas Popule, Alois Krejci

Abstract:

This paper deals with the experimental identification of mechanical systems with nonlinear friction characteristics. The dynamic LuGre friction model is adopted, and a systematic approach to parameter identification of both the linear and nonlinear subsystems is given. The identification procedure consists of three successive experiments, each dealing with an individual part of the plant dynamics. The proposed method is experimentally verified on an industrial-grade robotic manipulator. Model fidelity is compared with the results achieved with a static friction model.
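
For readers unfamiliar with the LuGre model mentioned above, a minimal simulation of its equations is sketched below; the parameter values are illustrative placeholders, not the values identified in the paper.

```python
import numpy as np

# Illustrative LuGre parameters (not the values identified in the paper)
sigma0, sigma1, sigma2 = 1e5, 300.0, 0.4   # bristle stiffness, damping, viscous coeff.
Fc, Fs, vs = 1.0, 1.5, 0.01                # Coulomb force, stiction force, Stribeck velocity

def g(v):
    """Stribeck curve: steady-state bristle deflection level at velocity v."""
    return (Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)) / sigma0

def lugre_force(v_profile, dt=1e-4):
    """Integrate the internal bristle state z and return the friction force."""
    z, F = 0.0, []
    for v in v_profile:
        dz = v - abs(v) * z / g(v)          # bristle-state dynamics
        z += dz * dt
        F.append(sigma0 * z + sigma1 * dz + sigma2 * v)
    return np.array(F)

t = np.arange(0, 1, 1e-4)
F = lugre_force(0.02 * np.sin(2 * np.pi * t))  # sinusoidal velocity sweep
```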

Keywords: mechanical friction, LuGre model, friction identification, motion control

Procedia PDF Downloads 412
27224 The Effect of Hypertrophy Strength Training Using Traditional Set vs. Cluster Set on Maximum Strength and Sprinting Speed

Authors: Bjornar Kjellstadli, Shaher A. I. Shalfawi

Abstract:

The aim of this study was to investigate the effect of the cluster-set strength training method, compared to the traditional-set method, on 30 m sprinting time and maximum strength in squats and bench press. Thirteen physical education students, 7 males and 6 females aged 19-28 years, were recruited and randomly divided into three groups. The traditional set group (TSG) consisted of 2 males and 2 females, aged (mean ± SD) 22.3 ± 1.5 years, body mass 79.2 ± 15.4 kg, and height 177.5 ± 11.3 cm. The cluster set group (CSG) consisted of 3 males and 2 females (aged 22.4 ± 3.29 years, body mass 81.0 ± 24.0 kg, height 179.2 ± 11.8 cm), and the control group (CG) consisted of 2 males and 2 females (aged 21.5 ± 2.4 years, body mass 82.1 ± 17.4 kg, height 175.5 ± 6.7 cm). The intervention consisted of performing squats and bench press at 70% of 1RM (twice a week) for 8 weeks, using 10 repetitions and 4 sets. Two strength-training methods were used: cluster sets (CS), in which the participants (CSG) performed 2 repetitions 5 times with 10 s recovery between repetitions and 50 s recovery between sets, and traditional sets (TS), in which the participants (TSG) performed 10 repetitions per set with 90 s recovery between sets. The pre-tests and post-tests conducted were 1RM in both squats and bench press, and 10 and 30 m sprint times. The 1RM tests were performed with an Eleiko XF barbell (20 kg), Eleiko weight plates, and a rack and bench from Hammer Strength. Sprint times were measured with the Brower Speed Trap II timing system (Brower Timing Systems, Utah, USA). The participants received an individualized training program based on the 1RM pre-test, and a mid-term 1RM test was carried out to adjust training intensity. Each training session was supervised by the researchers. Beast sensors (Milano, Italy) were also used to monitor and quantify the training load for the participants. All groups had a statistically significant improvement in bench press 1RM (TSG from 56.3 ± 28.9 to 66 ± 28.5 kg; CSG from 69.8 ± 33.5 to 77.2 ± 34.1 kg; CG from 67.8 ± 26.6 to 72.2 ± 29.1 kg), whereas only the TSG (from 84.3 ± 26.8 to 114.3 ± 26.5 kg) and the CSG (from 100.4 ± 33.9 to 129 ± 35.1 kg) had a statistically significant improvement in squat 1RM (P < 0.05). However, a between-groups examination revealed no marked differences in 1RM squat performance between TSG and CSG (P > 0.05), while both groups showed marked improvements compared to the CG (P < 0.05). On the other hand, no between-group differences were observed in bench press 1RM. The within-group results indicate that none of the groups had any marked improvement over the distances 0-10 m and 10-30 m, except the CSG, which had a notable improvement over 10-30 m (-0.07 s; P < 0.05). Furthermore, no differences in sprinting abilities were observed between groups. The results of this investigation indicate that traditional-set strength training at 70% of 1RM gave results close to cluster-set strength training at the same intensity. However, the cluster sets had an effect on flying time (10-30 m), indicating that the velocity at which those repetitions were performed could be the explanatory factor for this improvement.

Keywords: physical performance, 1RM, pushing velocity, velocity based training

Procedia PDF Downloads 164
27223 Introducing a Practical Model for Instructional System Design Based on Determining the Knowledge Level of the Organization: Case Study of Isfahan Public Transportation Co.

Authors: Mojtaba Aghajari, Alireza Aghasi

Abstract:

The first challenge faced by the current research was identifying the level of knowledge in the Isfahan Public Transportation Corporation; the second was selecting a proper approach to instructional system design. Responding to these two challenges yields an appropriate model of instructional system design. To address the first challenge, the Nonaka and Takeuchi knowledge management (KM) model was utilized due to its universality among the 26 models proposed so far. The statistical population of this research included 2,200 people, from whom 200 persons were chosen as the research sample using Morgan's method. Data were gathered by means of a questionnaire based on the Nonaka and Takeuchi KM model and analyzed with SPSS. The output of this questionnaire, a score of 1.96 out of 5, revealed that the general condition of the Isfahan Public Transportation Corporation is weak with respect to being knowledge-centered. Placing this output on Jonassen's continuum revealed that the appropriate approach for instructional system design is the systems (or behavioral) approach. Accordingly, the steps of the generic ADDIE model, which covers all of the ISO 10015 requirements, were adopted in the design. The resulting process for the Isfahan Public Transportation Corporation was divided into three main steps: instructional design and planning, instructional course planning, and evaluation of the effectiveness of the instructional courses.

Keywords: instructional system design, system approach, knowledge management, employees

Procedia PDF Downloads 324
27222 The Relationship Between Car Drivers' Background Information and Risky Events in the i-DREAMS Project

Authors: Dagim Dessalegn Haile

Abstract:

This study investigated the interaction between drivers' socio-demographic background information (age, gender, and driving experience) and the risky-event scores in the i-DREAMS platform. Further, the relationship between the participants' self-reported driving behavior and the i-DREAMS platform behavioral output scores for risky events was also investigated. The i-DREAMS acronym stands for Smart Driver and Road Environment Assessment and Monitoring System; it is a European Union Horizon 2020 funded project consisting of 13 partners, researchers and industry partners, from 8 countries. A total of 25 Belgian car drivers (16 male and 9 female) were considered for analysis. Drivers' ages were categorized into the groups 18-25, 26-45, 46-65, and 65 and older. Driving experience was also categorized into four groups: 1-15, 16-30, 31-45, and 46-60 years. Drivers were classified into two clusters based on the scores recorded during phase 1 (baseline) for six risky events: acceleration, deceleration, speeding, tailgating, overtaking, and lane discipline. Agglomerative hierarchical clustering using SPSS identified Cluster 1 drivers as safer drivers and Cluster 2 drivers as risky drivers. The analysis indicated no significant relationship between age groups, gender, and experience groups, except for risky events like acceleration, tailgating, and overtaking in a few phases, mainly because the small number of participants creates little variability across socio-demographic background groups. Repeated-measures ANOVA showed that Cluster 2 drivers improved more than Cluster 1 drivers for tailgating, lane discipline, and speeding events. A positive relationship between self-reported driving behavior and i-DREAMS platform behavioral output scores was observed, implying that car drivers who indicate in the questionnaire data that they commit more risky driving behavior also demonstrate more risky behavior in the i-DREAMS observed driving data.
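
The two-cluster grouping step described above (done in SPSS in the study) can be reproduced in outline with scikit-learn; the score matrix below is a random placeholder for the drivers' risky-event scores.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Placeholder matrix: one row per driver, one column per risky-event score
# (acceleration, deceleration, speeding, tailgating, overtaking, lane discipline)
scores = np.random.default_rng(0).random((25, 6))

# Two clusters, as in the study: safer drivers vs. risky drivers
labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(scores)
print(labels)
```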

Keywords: i-dreams, car drivers, socio-demographic background, risky events

Procedia PDF Downloads 68
27221 Analysis of Ozone Episodes in the Forest and Vegetation Areas Using the HYSPLIT Model: A Case Study of the North-West Side of the Biga Peninsula, Turkey

Authors: Deniz Sari, Selahattin İncecik, Nesimi Ozkurt

Abstract:

Surface ozone, regarded as one of the most critical pollutants of the 21st century, threatens human health, forests, and vegetation. In rural areas in particular, surface ozone significantly affects agricultural production and trees. In this study, in order to understand surface ozone levels in rural areas, we focus on the north-western side of the Biga Peninsula, which is covered by mountainous and forested terrain. Ozone concentrations were measured for the first time in this rural area with passive sampling at 10 sites and at two online monitoring stations from 2013 to 2015. Using the daytime hourly O3 measurements during light hours (08:00–20:00), the AOT40 (Accumulated hourly O3 concentration Over a Threshold of 40 ppb) cumulative index was calculated over three months (May, June, and July) for agricultural crops and over six months (April to September) for forest trees. AOT40 is defined by EU Directive 2008/50/EC to evaluate whether ozone pollution is a risk for vegetation and is calculated from hourly ozone concentrations recorded by monitoring systems. In the present study, we performed trajectory analysis with the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model to follow the long-range transport sources contributing to the high ozone levels in the region. The ozone episodes observed between 2013 and 2015 were analysed using the HYSPLIT model developed by NOAA-ARL. In addition, cluster analysis was used to identify homogeneous groups of air-mass transport patterns by grouping similar trajectories in terms of air-mass movement. Backward trajectories produced for the three years by the HYSPLIT model were assigned to different clusters according to their moving speed and direction using a k-means clustering algorithm. According to the cluster analysis results, northerly flows into the study area cause high ozone levels in the region. The results show that the ozone values in the study area are above the critical levels for forest and vegetation set by EU Directive 2008/50/EC.
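
The AOT40 definition used above reduces to a short computation: accumulate the hourly excess over 40 ppb during light hours of the relevant months. A minimal sketch, assuming an hourly ozone series indexed by timestamps:

```python
import pandas as pd

def aot40(hourly_o3_ppb: pd.Series, months) -> float:
    """Accumulate (O3 - 40 ppb) over light hours (08:00-20:00) of the given months.

    hourly_o3_ppb: hourly ozone series indexed by a DatetimeIndex.
    """
    s = hourly_o3_ppb[(hourly_o3_ppb.index.month.isin(months)) &
                      (hourly_o3_ppb.index.hour >= 8) &
                      (hourly_o3_ppb.index.hour < 20)]
    return (s - 40.0).clip(lower=0).sum()

# Crop window (May-July) and forest window (April-September), as in the abstract:
# aot40_crops  = aot40(o3, (5, 6, 7))
# aot40_forest = aot40(o3, (4, 5, 6, 7, 8, 9))
```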

Keywords: AOT40, Biga Peninsula, HYSPLIT, surface ozone

Procedia PDF Downloads 254
27220 BIM Model and Virtual Prototyping in Construction Management

Authors: Samar Alkindy

Abstract:

Purpose: The BIM model has been used to support the planning of different construction projects in the industry by showing the different stages of the construction process. The model has been instrumental in identifying some of the common errors in the construction process through the spatial arrangement. The continuous use of the BIM model in the construction industry has resulted in various radical changes, such as virtual prototyping. Construction virtual prototyping is a highly advanced technology that incorporates a BIM model with realistic graphical simulations and facilitates the simulation of the project before a product is built in the factory. The paper presents virtual prototyping in the construction industry by examining its application, challenges, and benefits to a construction project. Methodological approach: A case study was conducted of four major construction projects that incorporate virtual construction prototyping in several stages of the construction process. In addition, interviews were administered with the project managers, engineers, and planning managers. Findings: The data collected show a positive response to virtual construction prototyping in construction, especially concerning communication and visualization. Furthermore, the use of virtual prototyping has increased collaboration and efficiency between the construction experts handling a project. During the planning stage, virtual prototyping has increased accuracy, reduced planning time, and reduced the amount of rework during the implementation stage. Although virtual prototyping is a new concept in the construction industry, the findings show that the approach will benefit the management of construction projects.

Keywords: construction operations, construction planning, process simulation, virtual prototyping

Procedia PDF Downloads 230
27219 Micromechanical Modelling of Ductile Damage with a Cohesive-Volumetric Approach

Authors: Noe Brice Nkoumbou Kaptchouang, Pierre-Guy Vincent, Yann Monerie

Abstract:

The present work addresses the modelling and simulation of crack initiation and propagation in ductile materials, which fail by void nucleation, growth, and coalescence. One current research framework for crack propagation is the cohesive-volumetric approach, in which crack growth is modelled as the decohesion of two surfaces in a continuum. In this framework, the material behavior is characterized by two constitutive relations: a volumetric constitutive law relating stress and strain, and a traction-separation law across a two-dimensional surface embedded in the three-dimensional continuum. Several cohesive models have been proposed for the simulation of crack growth in brittle materials. The application of cohesive models to crack growth in ductile materials, on the other hand, is still a relatively open field. One idea developed in the literature is to identify the traction-separation law for ductile materials from the behavior of a continuously-deforming unit cell failing by void growth and coalescence. Following this method, the present study proposes a semi-analytical cohesive model for ductile materials based on a micromechanical approach. The strain localization band prior to ductile failure is modelled as a cohesive band, and the Gurson-Tvergaard-Needleman (GTN) plasticity model is used to describe the behavior of the cohesive band and to derive a corresponding traction-separation law. The numerical implementation of the model is realized using the non-smooth contact dynamics (NSCD) method, where cohesive models are introduced as mixed boundary conditions between volumetric finite elements. The approach is applied to the simulation of crack growth in a nuclear ferritic steel. The model provides an alternative way to simulate crack propagation, combining the numerical efficiency of cohesive models with a traction-separation law derived directly from a porous continuum model.
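
For orientation, the GTN yield surface that governs the cohesive band in this approach has the standard form below; the q1, q2, q3 values are the commonly used defaults, not necessarily those calibrated in the paper.

```python
import numpy as np

def gtn_yield(sigma_eq, sigma_m, sigma_y, f_star, q1=1.5, q2=1.0, q3=2.25):
    """Standard GTN yield function: Phi < 0 is elastic, Phi = 0 is yielding.

    sigma_eq: von Mises equivalent stress, sigma_m: mean (hydrostatic) stress,
    sigma_y: matrix flow stress, f_star: effective void volume fraction.
    """
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f_star * np.cosh(1.5 * q2 * sigma_m / sigma_y)
            - (1.0 + q3 * f_star ** 2))
```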

Keywords: ductile failure, cohesive model, GTN model, numerical simulation

Procedia PDF Downloads 148
27218 Subsidiary Strategy and Importance of Standards: Re-Interpreting the Integration-Responsiveness Framework

Authors: Jo-Ann Müller

Abstract:

The integration-responsiveness (IR) framework presents four distinct internationalization strategies, which differ depending on the extent of pressure the company faces for local responsiveness and global integration. This study applies the framework to standards by examining differences in the relative importance of three types of standards depending on the role the subsidiary plays within the corporate group. Hypotheses are tested empirically in a two-stage procedure. First, the subsidiaries are grouped by performing cluster analysis. In the second step, the relationship between cluster affiliation and subsidiary strategy is tested using multinomial probit estimation. While the level of local responsiveness of a firm relates to the relative importance of national and international formal standards, the degree of vertical integration is associated with the application of internal company standards.
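
The two-stage procedure can be outlined as follows. Note that statsmodels provides multinomial logit (MNLogit) rather than multinomial probit, so the logit variant is used here as a stand-in, and all data and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical subsidiary data: importance ratings of three standard types
df = pd.DataFrame(rng.random((200, 3)),
                  columns=["national_std", "international_std", "internal_std"])

# Stage 1: group subsidiaries by their standards profile
df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(df)

# Stage 2: relate cluster affiliation to a (hypothetical) strategy indicator.
# Multinomial logit as a stand-in for the paper's multinomial probit.
strategy = rng.integers(0, 3, 200)          # placeholder strategy categories
X = sm.add_constant(pd.get_dummies(df["cluster"], drop_first=True, dtype=float))
model = sm.MNLogit(strategy, X).fit(disp=0)
print(model.summary())
```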

Keywords: FDI, firm-level data, standards, subsidiary strategy

Procedia PDF Downloads 286
27217 The Study of the Effect of the Number of Clusters per Branch on the Vegetative Characteristics of Pistacia vera

Authors: Seyeh Hassan Eftekhar Afzali, Hamid Mohammadi

Abstract:

Pistachio is similar to almond, but its second growth cycle (the third phase) is relatively fast, which adds to the final mass of the product. As the embryo grows, it and its cover reach their final size over a six-week period. At the start of the second phase, lignification of the pericarp begins and continues for 4 to 6 weeks. Physiological maturity is marked by the easy separation of the green hull from the shell. The experiment was conducted as a randomized block design in six orchards of the 'Ahmad Aghaei' cultivar with four replications, and the vegetative properties of the branches were investigated. The results on the effect of cluster number on current-year branch growth show that the greatest branch growth occurred when one or two clusters were removed from the branch, and the greatest branch diameter occurred when one to four clusters were removed. Removing one cluster resulted in the largest number of nuts per cluster.

Keywords: pistachio, cluster, bud, fruit, branch

Procedia PDF Downloads 475
27216 Closest Possible Neighbor of a Different Class: Explaining a Model Using a Neighbor Migrating Generator

Authors: Hassan Eshkiki, Benjamin Mora

Abstract:

The Neighbor Migrating Generator is a simple and efficient approach to finding the closest potential neighbor(s) with a different label for a given instance, without the need to calibrate any kernel settings at all. This makes it possible to determine and explain the most important features that influence an AI model. It can be used either to migrate a specific sample to the class decision boundary of the original model within a close neighborhood of that sample, or to identify global features that can help localise neighbor classes. The proposed technique works by minimizing a loss function that is divided into two components, which are independently weighted according to three parameters α, β, and ω, with α being self-adjusting. Results show that this approach is superior to past techniques when detecting the smallest changes in the feature space, and it may also point out issues in models, such as over-fitting.
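
The abstract does not spell out the loss function, but its two-component structure suggests a search of roughly the following shape; the squared-distance proximity term, the target-class probability term, and the fixed weights below are illustrative guesses, not the authors' formulation (their α is self-adjusting), and a scikit-learn-style predict_proba interface is assumed.

```python
import numpy as np
from scipy.optimize import minimize

def migrate(x0, model, target_class, alpha=1.0, beta=1.0):
    """Move sample x0 toward the nearest point classified as target_class.

    Illustrative two-term loss: proximity to x0 plus distance from the
    target-class decision region (the paper's actual loss is not reproduced).
    """
    def loss(x):
        proximity = np.sum((x - x0) ** 2)                      # stay close to x0
        p_target = model.predict_proba(x.reshape(1, -1))[0, target_class]
        return alpha * proximity + beta * (1.0 - p_target)     # reach target class

    res = minimize(loss, x0, method="Nelder-Mead")
    return res.x  # candidate closest neighbor of a different class
```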

Keywords: explainable AI, EX AI, feature importance, counterfactual explanations

Procedia PDF Downloads 189
27215 Estimation of a Finite Population Mean under Random Non-Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of error in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one technique for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of the finite population mean, and it is also shown to have tighter confidence interval lengths at a 95% coverage rate. These results are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
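
The classical Nadaraya-Watson estimator on which the proposed improvement builds can be written compactly as follows; the Gaussian kernel, bandwidth, and data are illustrative.

```python
import numpy as np

def nadaraya_watson(x_query, x_obs, y_obs, h=0.5):
    """m(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h), with Gaussian K."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_obs[None, :]) / h) ** 2)
    return (w * y_obs).sum(axis=1) / w.sum(axis=1)

# Example: impute the survey variable for non-respondents from auxiliary x
x_obs = np.random.default_rng(2).uniform(0, 10, 100)   # respondents' auxiliary values
y_obs = np.sin(x_obs) + 0.1 * np.random.default_rng(3).normal(size=100)
x_missing = np.array([2.5, 7.1])                       # non-respondents' auxiliary values
y_hat = nadaraya_watson(x_missing, x_obs, y_obs)
```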

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 137
27214 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction

Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey

Abstract:

In this paper, we propose a novel approach combining neural network and particle swarm optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network so that we can derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid becoming trapped in local minima, we apply the particle swarm optimization method to train the proposed model on failure test data sets. We derive our proposed model using computation-based intelligence modeling; thus, the proposed model becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test the model with different inertia weights for the particle and velocity updates, obtain results based on the best inertia weight, and compare them with personal-best-oriented PSO (pPSO), which helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated on a real failure test data set. The results obtained from the experiments show that the proposed model has fairly accurate prediction capability for software reliability.
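
A minimal sketch of the PSO half of the idea is shown below: fitting an S-shaped logistic curve mu(t) = a / (1 + b exp(-c t)) to cumulative failure counts by global-best swarm search. The curve form, data, and PSO constants are illustrative, not the paper's exact FLGC formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(1, 21, dtype=float)                     # testing weeks
failures = 100 / (1 + 8 * np.exp(-0.4 * t)) + rng.normal(0, 2, t.size)

def sse(p):  # fit error of mu(t) = a / (1 + b exp(-c t))
    a, b, c = p
    return np.sum((failures - a / (1 + b * np.exp(-c * t))) ** 2)

# Basic global-best PSO (inertia w, cognitive c1, social c2 are typical values)
n, dim, w, c1, c2 = 30, 3, 0.7, 1.5, 1.5
pos = rng.uniform([50, 1, 0.01], [200, 20, 1.0], (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
gbest = pbest[pbest_f.argmin()]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([sse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("fitted (a, b, c):", gbest)
```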

Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization

Procedia PDF Downloads 343
27213 3D Printing Perceptual Models of Preference Using a Fuzzy Extreme Learning Machine Approach

Authors: Xinyi Le

Abstract:

In this paper, 3D printing orientations were determined through our perceptual model. Some FDM (Fused Deposition Modeling) 3D printers, which are widely used in universities and industry, often require support structures during additive manufacturing. After removing the residual material, some surface artifacts remain at the contact points, and these artifacts damage the function and visual effect of the model. To prevent the impact of these artifacts, we present a fuzzy extreme learning machine approach to find printing directions that avoid placing supports in perceptually significant regions. The proposed approach is able to solve the evaluation problem by combining both subjective knowledge and objective information, drawing on the advantages of fuzzy theory, auto-encoders, and the extreme learning machine. Fuzzy set theory is applied to deal with subjective preference information, and an auto-encoder step is used to extract good features without supervised labels before the extreme learning machine stage. An extreme learning machine method is then developed for training and learning the perceptual models. The performance of this perceptual model is demonstrated on both natural and man-made objects. It is a good human-computer interaction practice which draws on supporting knowledge from both the machine side and the human side.
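
For context, the extreme learning machine at the core of this approach trains only its output weights, by least squares, while the hidden layer stays random; the sketch below omits the fuzzy and auto-encoder stages and uses placeholder data.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((200, 8))                 # e.g., features of candidate print orientations
y = rng.random((200, 1))                 # e.g., perceptual preference scores

# Random hidden layer (never trained), L hidden neurons
L = 50
W, b = rng.normal(size=(8, L)), rng.normal(size=L)
H = np.tanh(X @ W + b)                   # hidden-layer output matrix

# Output weights by least squares (Moore-Penrose pseudoinverse)
beta = np.linalg.pinv(H) @ y

# Prediction for new samples
H_new = np.tanh(X[:5] @ W + b)
print(H_new @ beta)
```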

Keywords: 3d printing, perceptual model, fuzzy evaluation, data-driven approach

Procedia PDF Downloads 438
27212 A Posteriori Trading-Inspired Model-Free Time Series Segmentation

Authors: Mogens Graf Plessen

Abstract:

Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free in that no model prescriptions for the segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and it is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.
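
The a posteriori optimal trading step can be made concrete with a small two-state dynamic program: at each time instant the virtual portfolio holds either cash or the surrogate stock, and each switch pays a linear transaction cost c; the switch instants then serve as candidate segmentation points. This is a sketch of the idea, not the authors' exact algorithm.

```python
import numpy as np

def optimal_switch_times(price, c=0.5):
    """Two-state DP: maximize final wealth while switching between cash and stock."""
    T = len(price)
    cash = np.zeros(T)                 # best wealth if holding cash at t
    stock = np.zeros(T)                # best wealth (excl. stock value) if holding stock
    stock[0] = -price[0] - c           # buying at t=0 costs price plus fee
    choice_cash, choice_stock = np.zeros(T, bool), np.zeros(T, bool)
    for t in range(1, T):
        sell = stock[t-1] + price[t] - c
        cash[t] = max(cash[t-1], sell)
        choice_cash[t] = sell > cash[t-1]          # True -> a sell happened at t
        buy = cash[t-1] - price[t] - c
        stock[t] = max(stock[t-1], buy)
        choice_stock[t] = buy > stock[t-1]         # True -> a buy happened at t
    # Backtrack from the better terminal state to collect switch instants
    times, state = [], "cash" if cash[-1] >= stock[-1] else "stock"
    for t in range(T - 1, 0, -1):
        if state == "cash" and choice_cash[t]:
            times.append(t); state = "stock"
        elif state == "stock" and choice_stock[t]:
            times.append(t); state = "cash"
    return sorted(times)               # candidate segmentation instants

price = np.array([1, 2, 5, 4, 2, 1, 3, 6, 5, 2], float)
print(optimal_switch_times(price, c=0.5))
```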

Keywords: time series segmentation, model-free, trading-inspired, multivariate data

Procedia PDF Downloads 134
27211 Statistical Analysis to Select Evacuation Route

Authors: Zaky Musyarof, Dwi Yono Sutarto, Dwima Rindy Atika, R. B. Fajriya Hakim

Abstract:

Every country is responsible for the safety of its people, especially those living in disaster-prone areas, and one of the services it should provide is an evacuation route. Until now, however, the selection of evacuation routes has not appeared well organized: when a disaster happens, people accumulate on the steps of the evacuation route. That condition is dangerous because it hampers the evacuation process. Using several methods from statistical analysis, the authors suggest how to prepare an evacuation route that is organized and based on people's habits. These methods are association rules, sequential pattern mining, hierarchical cluster analysis, and fuzzy logic.

Keywords: association rules, sequential pattern mining, cluster analysis, fuzzy logic, evacuation route

Procedia PDF Downloads 503
27210 A Clustering-Based Approach for Weblog Data Cleaning

Authors: Amine Ganibardi, Cherif Arab Ali

Abstract:

This paper addresses the data cleaning issue as a part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers within log files reflect usage activity, i.e., end-users' clicks and underlying user-agents' hits. As Web Usage Mining is interested in end-users' behavior, user-agents' hits are regarded as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: a server records requests interlaced in sequential order regardless of their source or type, and website resources may be set up to be requestable interchangeably by end-users and user-agents. Current methods are content-centric, based on filtering heuristics of relevant/irrelevant items in terms of cleaning attributes such as the website's resource filetype extensions, the resources pointed to by hyperlinks/URIs, HTTP methods, and user-agents. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: resources may be set up as clickable by end-users regardless of their type, website resources may be indexed by frame names without filetype extensions, and web contents are generated and discarded differently from one end-user to another. In order to overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the level of the requested and referring resources attributes. It is insensitive to logging content and does not need extra-weblog data. The statistical property used reflects the structure of the logging generated by webpage requests in terms of clicks and hits: since a webpage consists of a single URI and several components, each webpage request results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. The clustering-based method is thus meant to identify two clusters by applying an appropriate distance to the frequency matrix at the requested and referring resources levels. As the clicks-to-hits ratio is one-to-many, the clicks cluster is the smaller one in number of requests. Hierarchical agglomerative clustering based on a pairwise distance (Gower) and average linkage has been applied to four logfiles of dynamic/responsive websites whose click-to-hits ratios range from 1/2 to 1/15. The optimal clustering, chosen on the basis of average linkage and maximum inter-cluster inertia, always results in two clusters. Evaluating the smaller cluster, referred to as the clicks cluster, in terms of confusion matrix indicators yields a 97% true positive rate. The content-centric cleaning methods, i.e., conventional and advanced cleaning, result in a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web designs without the need for any extra-weblog data. Such an improvement in cleaning quality is likely to refine subsequent analyses.
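
A sketch of the clustering step under the stated settings (Gower distance, average linkage, two clusters, smaller cluster taken as clicks); the third-party gower package and the layout of the frequency matrix are assumptions about one possible implementation.

```python
import numpy as np
import pandas as pd
import gower                                   # pip install gower (assumed available)
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Placeholder frequency matrix: rows = log entries, columns = features derived
# from the requested and referring resource attributes.
log_features = pd.DataFrame(np.random.default_rng(6).integers(0, 20, (500, 4)),
                            columns=["req_freq", "ref_freq", "req_rank", "ref_rank"])

D = gower.gower_matrix(log_features)           # pairwise Gower distances
Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# The clicks cluster is the smaller one (one click maps to many hits)
clicks_label = pd.Series(labels).value_counts().idxmin()
clicks = log_features[labels == clicks_label]
```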

Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data

Procedia PDF Downloads 170
27209 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand

Authors: Chukiat Chaiboonsri, Satawat Wannapan

Abstract:

This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping (MEboot) approach, based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, tourism leakages were investigated with a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed by the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.

Keywords: Thailand tourism, Maximum Entropy Bootstrapping approach, macroeconomic model, asymmetric information

Procedia PDF Downloads 294
27208 Mine Project Evaluations in the Rising of Uncertainty: Real Options Analysis

Authors: I. Inthanongsone, C. Drebenstedt, J. C. Bongaerts, P. Sontamino

Abstract:

The major concern in evaluating mining projects relates to the deficiencies of the traditional discounted cash flow (DCF) method. This method does not take uncertainties into account and hence does not allow for an economic assessment of managerial flexibility and operational adaptability, which increasingly determine long-term corporate success. Such an assessment can be performed with the real options valuation (ROV) approach, since it allows for a comparative evaluation of unforeseen uncertainties over a project's life cycle. This paper presents an economic evaluation model for open pit mining projects based on the real options valuation approach. Uncertainties in the model arise from metal prices and costs, and the system dynamics (SD) modeling method is used to structure and solve the real options model. The model is applied to a case study. It can be shown that managerial flexibility reacting to uncertainties may create additional value for a mining project in comparison to the outcomes of a DCF method. One important insight for management dealing with uncertainty lies in choosing the optimal time to exercise strategic options.
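
To make the DCF-versus-ROV contrast concrete, a one-period binomial sketch of an option to defer is shown below; all figures are illustrative and unrelated to the case study, which uses a system dynamics implementation instead.

```python
# One-period binomial valuation of an option to defer a mine investment.
# All figures are illustrative placeholders.
V_up, V_down = 140.0, 70.0     # project value next year (metal price up / down)
I = 100.0                      # investment cost
r = 0.05                       # risk-free rate
p = 0.5                        # risk-neutral probability of the up move (assumed)

# Static DCF: invest now or never
npv_now = (p * V_up + (1 - p) * V_down) / (1 + r) - I

# Real option: wait one year, invest only if worthwhile
option_value = (p * max(V_up - I, 0) + (1 - p) * max(V_down - I, 0)) / (1 + r)

print(f"NPV if invested today: {npv_now:.2f}")          # 0.00 here
print(f"Value with option to defer: {option_value:.2f}") # positive: flexibility premium
```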

Keywords: DCF methods, ROV approach, system dynamics modeling methods, uncertainty

Procedia PDF Downloads 500
27207 Using Genetic Algorithms and Rough Set Based Fuzzy K-Modes to Improve Centroid Model Clustering Performance on Categorical Data

Authors: Rishabh Srivastav, Divyam Sharma

Abstract:

We propose an algorithm to cluster categorical data, named 'Genetic algorithm initialized rough set based fuzzy K-Modes for categorical data'. The algorithm is an amalgamation of the simple K-modes algorithm, the rough and fuzzy set based K-modes, and the genetic algorithm, which we hypothesise will provide better centroid-model clustering results than existing standard algorithms. In the proposed algorithm, the initialization and updating of the modes are done by means of genetic algorithms, while the membership values are calculated using rough sets and fuzzy logic.

Keywords: categorical data, fuzzy logic, genetic algorithm, K modes clustering, rough sets

Procedia PDF Downloads 246
27206 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural cirque in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being tested in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. Managing the energy flows leads to a large linear programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with optimal use of the energy in the batteries, while satisfying the users' planning requirements as widely as possible. In the discrete model, the parameters and solutions of the linear programming system are deterministic scalars. In the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or established by estimation from samples of real observations or from samples of optimal discrete equilibrium solutions.
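
A toy version of the 24-hour linear program can be written with PuLP; the solar profile, bounds, and initial state of charge below are invented for illustration and are far simpler than the system described.

```python
import pulp

T = range(24)
solar = [0,0,0,0,0,1,3,5,7,8,9,9,8,7,5,3,1,0,0,0,0,0,0,0]   # kWh, placeholder profile
demand_max, batt_cap = 4.0, 20.0                             # kWh, illustrative bounds

prob = pulp.LpProblem("microgrid_day", pulp.LpMaximize)
use = [pulp.LpVariable(f"use_{t}", 0, demand_max) for t in T]  # energy delivered to users
soc = [pulp.LpVariable(f"soc_{t}", 0, batt_cap) for t in T]    # battery state of charge

prob += pulp.lpSum(use)                        # serve as much demand as possible
for t in T:
    prev = soc[t-1] if t > 0 else batt_cap / 2 # start half-charged (assumption)
    # Energy balance: the battery absorbs the gap between solar and consumption
    prob += soc[t] == prev + solar[t] - use[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([round(u.value(), 2) for u in use])
```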

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity

Procedia PDF Downloads 110
27205 A “Best Practice” Model for Physical Education in the BRICS Countries

Authors: Vasti Oelofse, Niekie van der Merwe, Dorita du Toit

Abstract:

This study addresses the need for a unified best practice model for Physical Education across BRICS nations, as current research primarily offers individual country recommendations. Drawing on relevant literature within the framework of Bronfenbrenner's Ecological Systems Theory, as well as data from open-ended questionnaires completed by Physical Education experts from the BRICS countries, the study develops a best practice model based on identified challenges and effective practices in Physical Education. The proposed model incorporates flexible and resource-efficient strategies tailored to the PE challenges specific to these countries, enhancing outcomes for learners, empowering teachers, and fostering systemic collaboration among BRICS members. It comprises six key areas: “Curriculum and policy requirements”, “General approach”, “Theoretical basis”, “Strategies for presenting content”, “Teacher training”, and “Evaluation”. The “Strategies for presenting content” area addresses both well-resourced and poorly resourced schools, adapting the curriculum, teaching strategies, materials, and learner activities to varied socio-economic contexts. The model emphasizes a holistic approach to learner development, engaging environments, and continuous teacher training. A collaborative approach among the BRICS countries, focusing on shared best practices and continuous improvement, is vital for the model's successful implementation, enhancing Physical Education programs and outcomes across these nations.

Keywords: BRICS countries, physical education, best practice model, ecological systems theory

Procedia PDF Downloads 12
27204 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
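
A condensed sketch of the k-mer featurization and classification pipeline described above; the sequences and labels are placeholders, a random forest stands in for the unspecified classifier, and the AWD-LSTM and bootstrapping steps are omitted.

```python
from collections import Counter
from itertools import product

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kmer_counts(seq: str, k: int) -> Counter:
    """Count overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def featurize(sequences, k):
    """Fixed-order k-mer count matrix (4**k columns; practical only for small k)."""
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    return np.array([[kmer_counts(s, k)[m] for m in vocab] for s in sequences])

# Placeholder isolate sequences and phenotype labels
seqs = ["ACGTACGTGGCA", "TTGACGTACGCA", "GGCATTGACGTT", "ACGTTTGACGCA"]
labels = [0, 1, 1, 0]

X = featurize(seqs, k=3)
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(clf.predict(featurize(["ACGTGGCATTGA"], k=3)))
```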

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 166
27203 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach

Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini

Abstract:

Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.

Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing

Procedia PDF Downloads 158
27202 Developing a Sustainable Business Model for Platform-Based Applications in Small and Medium-Sized Enterprise Sawmills: A Systematic Approach

Authors: Franziska Mais, Till Gramberg

Abstract:

The paper presents the development of a sustainable business model for a platform-based application tailored for sawing companies in small and medium-sized enterprises (SMEs). The focus is on the integration of sustainability principles into the design of the business model to ensure a technologically advanced, legally sound, and economically efficient solution. Easy2IoT is a research project that aims to enable companies in the prefabrication sheet metal and sheet metal processing industry to enter the Industrial Internet of Things (IIoT) with a low-threshold and cost-effective approach. The methodological approach of Easy2IoT includes an in-depth requirements analysis and customer interviews with stakeholders along the value chain. Based on these insights, actions, requirements, and potential solutions for smart services are derived. The structuring of the business ecosystem within the application plays a central role, whereby the roles of the partners, the management of the IT infrastructure and services, as well as the design of a sustainable operator model are considered. The business model is developed using the value proposition canvas, whereby a detailed analysis of the requirements for the business model is carried out, taking sustainability into account. This includes coordination with the business model patterns, according to Gassmann, and integration into a business model canvas for the Easy2IoT product. Potential obstacles and problems are identified and evaluated in order to formulate a comprehensive and sustainable business model. In addition, sustainable payment models and distribution channels are developed. In summary, the article offers a well-founded insight into the systematic development of a sustainable business model for platform-based applications in SME sawmills, with a particular focus on the synergy of ecological responsibility and economic efficiency.

Keywords: business model, sustainable business model, IIoT, IIoT platform, industry 4.0, big data

Procedia PDF Downloads 81
27201 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the tumor layer, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and the time-varying freezing front are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation, and the phase change problem is handled with the enthalpy method. The Levenberg-Marquardt Method (LMM), combined with the Broyden Method (BM), is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues, (b) of the initial guesses for the unknown thermal properties, (c) of the data capture frequency, and (d) of the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.
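
A compact illustration of the inverse idea, using SciPy's Levenberg-Marquardt solver on a steady one-dimensional Pennes equation; the geometry, tissue properties, and synthetic probe data are placeholders, and the enthalpy/phase-change treatment and the Broyden step are omitted.

```python
import numpy as np
from scipy.optimize import least_squares

# Steady 1-D Pennes equation on [0, L]: k T'' + w*rho_b*c_b*(Ta - T) + q_m = 0
k, rho_c_b, Ta = 0.5, 4.0e6, 37.0        # W/m/K, J/m^3/K, deg C (placeholders)
L, n = 0.05, 101
x = np.linspace(0, L, n); dx = x[1] - x[0]

def solve_pennes(w, qm, T_left=-40.0, T_right=37.0):
    """Finite-difference solve with fixed (Dirichlet) boundary temperatures."""
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = T_left, T_right        # cryoprobe side / body-core side
    for i in range(1, n - 1):
        A[i, i-1] = A[i, i+1] = k / dx**2
        A[i, i] = -2 * k / dx**2 - w * rho_c_b
        b[i] = -w * rho_c_b * Ta - qm
    return np.linalg.solve(A, b)

# Synthetic "probe" data from known parameters, plus noise
w_true, qm_true = 0.002, 4000.0          # perfusion rate (1/s), metabolic heat (W/m^3)
T_probe = solve_pennes(w_true, qm_true) + np.random.default_rng(7).normal(0, 0.1, n)

# Levenberg-Marquardt recovery of (w, qm) from the probe temperatures
res = least_squares(lambda p: solve_pennes(p[0], p[1]) - T_probe,
                    x0=[0.01, 1000.0], method="lm")
print(res.x)  # should be close to (w_true, qm_true)
```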

Keywords: cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method

Procedia PDF Downloads 198
27200 A Crop Growth Subroutine for Watershed Resources Management (WRM) Model

Authors: Kingsley Nnaemeka Ogbu, Constantine Mbajiorgu

Abstract:

Vegetation has a marked effect on runoff and has become an important component in hydrologic models. The Watershed Resources Management (WRM) model, a process-based, continuous, distributed-parameter simulation model developed for hydrologic and soil erosion studies at the watershed scale, lacks a crop growth component; as such, it assumes constant values for the vegetation and hydraulic parameters throughout the duration of a hydrologic simulation. Our approach is to develop a crop growth algorithm based on the original plant growth model used in the Environmental Policy Integrated Climate (EPIC) model. This paper describes the development of a single crop growth model capable of simulating all crops using unique parameter values for each crop. The simulated crop growth processes will reflect the vegetative seasonality of the natural watershed system. An existing model was employed for evaluating vegetative resistance through the hydraulic and vegetative parameters incorporated into the WRM model. The improved WRM model will be able to evaluate the seasonal variation of the vegetative roughness coefficient with depth of flow, further enhancing the model's capability for accurate hydrologic studies.
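
The EPIC-style plant growth logic referred to above can be sketched as a daily heat-unit accumulation that drives canopy development; the constants and the S-shaped leaf-area curve below are illustrative, not calibrated values from EPIC or WRM.

```python
import numpy as np

def heat_unit_index(tavg, t_base=8.0, phu=1600.0):
    """EPIC-style heat unit index: fraction of potential heat units accumulated."""
    hu = np.maximum(tavg - t_base, 0.0)        # daily heat units
    return np.minimum(np.cumsum(hu) / phu, 1.0)

def lai_curve(hui, lai_max=4.0):
    """Illustrative S-shaped leaf-area-index development from the heat unit index."""
    return lai_max * hui**2 / (hui**2 + (1 - hui)**2 + 1e-9)

tavg = 18 + 8 * np.sin(np.linspace(0, np.pi, 180))   # placeholder season temperatures
hui = heat_unit_index(tavg)
lai = lai_curve(hui)   # could then drive seasonal roughness/interception in WRM
```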

Keywords: crop yield, roughness coefficient, PAR, WRM model

Procedia PDF Downloads 407
27199 Model of Optimal Centroids Approach for Multivariate Data Classification

Authors: Pham Van Nha, Le Cam Binh

Abstract:

Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm inspired by the natural behavior of birds and fish during migration and foraging. PSO is considered a multidisciplinary optimization model that can be applied to various optimization problems. PSO's ideas are simple and easy to understand, yet PSO has mostly been applied to simple model problems. We believe that, in order to expand the applicability of PSO to complex problems, PSO should be described more explicitly in the form of a mathematical model. In this paper, we represent PSO as a mathematical model and apply it to multivariate data classification. First, PSO's general mathematical model (MPSO) is analyzed as a universal optimization model. Then, the Model of Optimal Centroids (MOC) is proposed for multivariate data classification. Experiments were conducted on several benchmark data sets to demonstrate the effectiveness of MOC compared with several previously proposed schemes.
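
A minimal sketch of the optimal-centroids idea: each particle encodes K candidate centroids, and the swarm minimizes the within-cluster sum of squared distances; the data, swarm size, and PSO constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
# Three synthetic clusters of 2-D points (placeholder data)
X = np.vstack([rng.normal(m, 0.5, (50, 2)) for m in ((0, 0), (4, 4), (0, 4))])
K, dim = 3, X.shape[1]

def fitness(particle):
    """Within-cluster sum of squared distances for the K centroids a particle encodes."""
    C = particle.reshape(K, dim)
    d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # point-to-centroid distances
    return d.min(axis=1).sum()

n = 20
pos = rng.uniform(X.min(), X.max(), (n, K * dim))       # each particle = K centroids
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()]

for _ in range(300):                                    # standard PSO update rule
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print(gbest.reshape(K, dim))  # optimized centroids; labels follow by nearest centroid
```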

Keywords: analysis of optimization, artificial intelligence based optimization, optimization for learning and data analysis, global optimization

Procedia PDF Downloads 207
27198 Improving the Bioprocess Phenotype of Chinese Hamster Ovary Cells Using CRISPR/Cas9 and Sponge Decoy Mediated MiRNA Knockdowns

Authors: Kevin Kellner, Nga Lao, Orla Coleman, Paula Meleady, Niall Barron

Abstract:

Chinese Hamster Ovary (CHO) cells are the predominant cell line used in biopharmaceutical production. To improve yields and find beneficial bioprocess phenotypes, genetic engineering plays an essential role in recent research. The miR-23 cluster, specifically miR-24 and miR-27, was first identified as differentially expressed during hypothermic conditions, suggesting a role in proliferation and productivity in CHO cells. In this study, we used sponge decoy technology to stably deplete the miRNA expression of the cluster. Sponge constructs were designed for imperfect binding of the miRNA target, protecting it from RISC-mediated cleavage. Furthermore, we implemented the CRISPR/Cas9 system to knock down miRNA expression, with guide RNAs designed to target the seed region of the miRNA. The expression of the mature miRNA and its precursor was confirmed using RT-qPCR. For both approaches, stably expressing mixed populations were generated and characterised in batch cultures. It was shown that CRISPR/Cas9 can be implemented in CHO cells, achieving high knockdown efficacy for every single member of the cluster. Targeting one miRNA member showed that its genomic paralog is successfully targeted as well. The stable depletion of miR-24 using CRISPR/Cas9 increased growth and specific productivity in a CHO-K1 mAb-expressing cell line. This phenotype was further characterized using quantitative label-free LC-MS/MS, which showed 186 differentially expressed proteins, 19 of them involved in proliferation and 26 involved in protein folding/translation. Targeting miR-27 in the same cell line increased viability in the late stages of the culture compared to the control. To evaluate the phenotype in an industry-relevant cell line, the miR-23 cluster, miR-24, and miR-27 were stably depleted in an Fc-fusion CHO-S cell line, which showed batch titers increased by up to 1.5-fold. In this work, we highlight that the stable depletion of the miR-23 cluster and its members can improve the bioprocess phenotype with respect to growth and productivity in two different cell lines, and we show that using CRISPR/Cas9 is comparable to the traditional sponge decoy technology.

Keywords: Chinese Hamster ovary cells, CRISPR/Cas9, microRNAs, sponge decoy technology

Procedia PDF Downloads 197
27197 Capacitated Multiple Allocation P-Hub Median Problem on a Cluster Based Network under Congestion

Authors: Çağrı Özgün Kibiroğlu, Zeynep Turgut

Abstract:

This paper considers a hub location problem in which the network service area is partitioned into predetermined zones (represented by given node clusters) and the capacity levels of potential hub nodes are determined a priori as a hub selection criterion, in order to investigate the effect of congestion on the network. The objective is to design the hub network by determining all required hub locations within the node clusters and allocating the non-hub nodes to hubs such that the total cost, comprising the transportation cost, the opening cost of hubs, and a penalty cost for exceeding the capacity level at hubs, is minimized. A mixed integer linear programming model is developed by introducing additional constraints into the traditional model of the capacitated multiple allocation hub location problem, and it is tested empirically.
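
A compressed toy formulation of such a model can be written with PuLP; the instance data, the soft-capacity penalty, and the restriction to a 4-node network are illustrative simplifications, not the paper's exact model.

```python
import itertools as it
import pulp

# Toy instance: 4 nodes, symmetric distances, flows, hub data (all placeholders)
N = range(4)
c = [[0, 4, 6, 8], [4, 0, 3, 7], [6, 3, 0, 5], [8, 7, 5, 0]]   # unit transport costs
f = [[0, 10, 5, 8], [10, 0, 6, 4], [5, 6, 0, 9], [8, 4, 9, 0]] # origin-destination flows
F, cap, alpha, pen = 100, 40, 0.6, 3.0    # hub cost, capacity, discount, penalty rate

prob = pulp.LpProblem("cap_multi_alloc_p_hub", pulp.LpMinimize)
y = pulp.LpVariable.dicts("hub", N, cat="Binary")
x = pulp.LpVariable.dicts("route", (N, N, N, N), 0, 1)   # fraction of i->j via hubs k, m
s = pulp.LpVariable.dicts("excess", N, 0)                # capacity overrun at hub k

prob += (pulp.lpSum(f[i][j] * (c[i][k] + alpha * c[k][m] + c[m][j]) * x[i][j][k][m]
                    for i, j, k, m in it.product(N, repeat=4))
         + pulp.lpSum(F * y[k] + pen * s[k] for k in N))

for i, j in it.product(N, repeat=2):
    prob += pulp.lpSum(x[i][j][k][m] for k, m in it.product(N, repeat=2)) == 1
    for k, m in it.product(N, repeat=2):
        prob += x[i][j][k][m] <= y[k]        # route only through open hubs
        prob += x[i][j][k][m] <= y[m]
for k in N:                                   # soft capacity on first-hub traffic
    prob += (pulp.lpSum(f[i][j] * x[i][j][k][m]
                        for i, j, m in it.product(N, repeat=3)) <= cap * y[k] + s[k])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([k for k in N if y[k].value() > 0.5])  # opened hubs
```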

Keywords: hub location problem, p-hub median problem, clustering, congestion

Procedia PDF Downloads 490