Search results for: Teaching learning model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9018

5748 Evaluation of the Weight-Based and Fat-Based Indices in Relation to Basal Metabolic Rate-to-Weight Ratio

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Basal metabolic rate is questioned as a risk factor for weight gain. The relations between basal metabolic rate and body composition have not yet been clarified. The impact of fat mass on basal metabolic rate is also uncertain. Within this context, indices based upon total body mass as well as total body fat mass are available. In this study, the aim is to investigate the potential clinical utility of these indices in the adult population. 287 individuals, aged 18 to 79 years, were included in the study. Based upon body mass index values, 10 underweight, 88 normal, 88 overweight, 81 obese, and 20 morbidly obese individuals participated. Anthropometric measurements including height (m) and weight (kg) were performed. Body mass index, diagnostic obesity notation model assessment index I, diagnostic obesity notation model assessment index II, and the basal metabolic rate-to-weight ratio were calculated. Total body fat mass (kg), fat percent (%), basal metabolic rate, metabolic age, visceral adiposity, fat mass of the upper and lower extremities and the trunk, and obesity degree were measured with a TANITA body composition monitor using bioelectrical impedance analysis technology. Statistical evaluations were performed with the statistical package SPSS for Windows, Version 16.0. Scatterplots of the individual measurements were drawn for the parameters whose correlations were examined, and linear regression lines were displayed. Statistical significance was accepted at p < 0.05. Strong correlations were obtained between body mass index and diagnostic obesity notation model assessment index I as well as diagnostic obesity notation model assessment index II (p < 0.001). A much stronger correlation was detected between basal metabolic rate and diagnostic obesity notation model assessment index I than between basal metabolic rate and body mass index (p < 0.001). Upon consideration of the associations between the basal metabolic rate-to-weight ratio and these three indices, the best association was observed between the basal metabolic rate-to-weight ratio and diagnostic obesity notation model assessment index II. In a similar manner, this index was highly correlated with fat percent (p < 0.001). Independently of the indices, a strong correlation was found between fat percent and the basal metabolic rate-to-weight ratio (p < 0.001). Visceral adiposity was much more strongly correlated with metabolic age than with chronological age (p < 0.001). In conclusion, all three indices were associated with metabolic age, but not with chronological age. Diagnostic obesity notation model assessment index II values were highly correlated with body mass index values throughout all ranges, from underweight to morbid obesity. This index is the best in terms of its association with the basal metabolic rate-to-weight ratio, which can be interpreted as a basal metabolic rate unit.
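
As a minimal sketch of the correlation analysis described above (the diagnostic obesity notation model assessment index formulas are not given in the abstract, so the arrays below are placeholders for per-subject measurements), Pearson and Spearman coefficients with p-values can be obtained with SciPy:

import numpy as np
from scipy import stats

# Placeholder arrays standing in for per-subject values; real data would come
# from the anthropometric measurements and the TANITA body composition monitor.
bmi = np.array([17.5, 22.1, 27.3, 31.8, 41.2, 24.6, 29.9, 35.4])
bmr_to_weight = np.array([24.8, 22.9, 21.1, 19.8, 17.9, 22.3, 20.5, 18.8])

r, p = stats.pearsonr(bmi, bmr_to_weight)          # linear association
rho, p_rank = stats.spearmanr(bmi, bmr_to_weight)  # rank-based association
print(f"Pearson r = {r:.3f} (p = {p:.4f}), Spearman rho = {rho:.3f} (p = {p_rank:.4f})")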

Keywords: Basal metabolic rate, body mass index, children, diagnostic obesity notation model assessment index, obesity.

5747 Probabilistic Electrical Power Generation Modeling Using Decimal to Binary Conversion

Authors: Ahmed S. Al-Abdulwahab

Abstract:

Generation system reliability assessment is an important task which can be performed using deterministic or probabilistic techniques. The probabilistic approaches have significant advantages over the deterministic methods; however, they require more complicated modeling. A power generation model is a basic requirement for this assessment. One form of the generation model is the well-known capacity outage probability table (COPT). Different analytical techniques have been used to construct the COPT. These approaches require considerable mathematical modeling of the generating units, and the unit models are then combined to build the COPT, which adds further burden to the process of creating it. The Decimal to Binary Conversion (DBC) technique is widely and commonly applied in electronic systems and computing. This paper proposes a novel utilization of DBC to create the COPT without engaging in analytical modeling or time-consuming simulations. The simple binary representation, "0" and "1", is used to model the states of the generating units. The proposed technique is proven to be an effective approach to build the generation model.
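
A minimal sketch of the idea, assuming a two-state model for each unit (in service with probability 1 - FOR, on outage with probability FOR); the unit data and variable names below are illustrative, not taken from the paper:

from collections import defaultdict

# Hypothetical generating units: (capacity in MW, forced outage rate)
units = [(100, 0.02), (150, 0.04), (200, 0.05)]

def build_copt(units):
    """Build a capacity outage probability table by decimal-to-binary
    enumeration: each integer 0..2^n-1 encodes one combination of unit
    states (bit set = unit in service, bit clear = unit on outage)."""
    n = len(units)
    total_capacity = sum(cap for cap, _ in units)
    table = defaultdict(float)
    for state in range(2 ** n):
        prob, available = 1.0, 0
        for i, (cap, forced_outage_rate) in enumerate(units):
            if (state >> i) & 1:           # unit available
                prob *= 1.0 - forced_outage_rate
                available += cap
            else:                           # unit on outage
                prob *= forced_outage_rate
        table[total_capacity - available] += prob  # merge equal-outage states
    return dict(sorted(table.items()))

if __name__ == "__main__":
    for outage_mw, probability in build_copt(units).items():
        print(f"{outage_mw:6d} MW out  p = {probability:.6f}")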

Keywords: Decimal to Binary, generation, reliability.

5746 Sensitivity Analysis of Principal Stresses in Concrete Slab of Rigid Pavement Made From Recycled Materials

Authors: Aleš Florian, Lenka Ševelová

Abstract:

A comprehensive sensitivity analysis of the stresses in a concrete slab of a real rigid pavement made from recycled materials is performed. The computational model of the pavement is a spatial (3D) model based on a nonlinear variant of the finite element method that respects the structural nonlinearity, makes it possible to model different arrangements of joints, and allows the entire model to be subjected to thermal loading. The interaction of adjacent slabs in the joints and the contact between the slab and the underlying layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, the additional structural layers, and the soil to a depth of about 3 m are modeled. The thicknesses of the individual layers, the physical and mechanical properties of the materials, the characteristics of the joints, and the temperatures of the upper and lower surfaces of the slabs are treated as random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of the individual input variables on the random variability of the principal stresses σ1 and σ3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
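
A sketch of the sensitivity measure, assuming the coefficient is taken directly as the Spearman rank correlation between each sampled input and the resulting stress (the paper's exact definition may rescale this); the input variables and the stand-in stress response below are illustrative:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_sim = 20  # Updated Latin Hypercube Sampling would supply these designs

# Placeholder samples of two input variables and a resulting principal stress.
layer_thickness = rng.uniform(0.20, 0.30, n_sim)       # m
surface_temperature = rng.uniform(20.0, 45.0, n_sim)   # deg C
sigma_1 = (2.0e6 * layer_thickness - 1.0e4 * surface_temperature
           + rng.normal(0, 5e3, n_sim))                # Pa, stand-in FEM output

for name, x in [("thickness", layer_thickness), ("temperature", surface_temperature)]:
    rho, _ = spearmanr(x, sigma_1)
    print(f"sensitivity of sigma_1 to {name}: {rho:+.2f}")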

Keywords: Concrete, FEM, pavement, sensitivity, simulation.

5745 Effect of Viscous Dissipation and Axial Conduction in Thermally Developing Region of the Channel Partially Filled with a Porous Material Subjected to Constant Wall Heat Flux

Authors: D. Bhargavi, J. Sharath Kumar Reddy

Abstract:

The present investigation assesses the effect of viscous dissipation and axial conduction on forced convection heat transfer in the entrance region of a parallel plate channel with porous inserts attached to both walls. The flow field is unidirectional. Flow in the porous region is described by the Darcy-Brinkman model, and flow in the clear fluid region by plane Poiseuille flow. The effects of the Darcy number, Da, the Peclet number, Pe, the Brinkman number, Br, and the porous fraction γp on the local heat transfer coefficient are analyzed graphically. The effects of viscous dissipation are studied using both the Darcy model and the clear-fluid-compatible model.
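
For reference, a commonly used form of the governing momentum balances for such a composite channel is sketched below (standard textbook expressions, not necessarily the exact non-dimensionalization used in the paper):

\[
\mu_{\mathrm{eff}}\,\frac{d^{2}u_{p}}{dy^{2}} - \frac{\mu}{K}\,u_{p} = \frac{dp}{dx} \quad \text{(porous region, Darcy-Brinkman)},
\qquad
\mu\,\frac{d^{2}u_{f}}{dy^{2}} = \frac{dp}{dx} \quad \text{(clear fluid region)},
\]

where \(K\) is the permeability and \(Da = K/H^{2}\) for a channel of gap \(H\); one common definition of the Brinkman number for a constant wall heat flux \(q_w\) is \(Br = \mu u_{m}^{2}/(q_{w} H)\), which measures viscous dissipation against wall heating, and the Peclet number \(Pe = u_{m} H/\alpha\) controls the importance of axial conduction.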

Keywords: Porous material, channel partially filled with a porous material, axial conduction, viscous dissipation.

5744 Ontology-Based Approach for Temporal Semantic Modeling of Social Networks

Authors: Souâad Boudebza, Omar Nouali, Faiçal Azouaou

Abstract:

Social networks have recently gained growing interest on the web. Traditional formalisms for representing social networks are static and suffer from a lack of semantics. In this paper, we show how semantic web technologies can be used to model social data. The SemTemp ontology aligns and extends existing ontologies such as FOAF, SIOC, SKOS and OWL-Time to provide a temporal and semantically rich description of social data. We also present a modeling scenario to illustrate how our ontology can be used to model social networks.
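
A minimal rdflib sketch of the general idea of attaching an OWL-Time interval to a FOAF acquaintance relation; the SemTemp vocabulary itself is not described in the abstract, so the example.org class and property names used to reify the relation are hypothetical:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF, XSD

TIME = Namespace("http://www.w3.org/2006/time#")
EX = Namespace("http://example.org/semtemp/")   # hypothetical namespace

g = Graph()
g.bind("foaf", FOAF)
g.bind("time", TIME)
g.bind("ex", EX)

alice, bob = EX.alice, EX.bob
g.add((alice, RDF.type, FOAF.Person))
g.add((bob, RDF.type, FOAF.Person))

# Reify the "knows" relation so that it can carry a temporal extent.
rel = EX.rel1
g.add((rel, RDF.type, EX.TemporalRelation))     # hypothetical class
g.add((rel, EX.subject, alice))
g.add((rel, EX.object, bob))
g.add((rel, EX.property, FOAF.knows))

interval = EX.interval1
g.add((interval, RDF.type, TIME.Interval))
g.add((interval, TIME.hasBeginning, Literal("2015-01-01", datatype=XSD.date)))
g.add((rel, EX.validDuring, interval))          # hypothetical property

print(g.serialize(format="turtle"))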

Keywords: Ontology, semantic web, social network, temporal modeling.

5743 From I. A. Richards to Web 3.0: Preparing Our Students for Tomorrow's World

Authors: Karen Armstrong

Abstract:

This paper offers suggestions for educators at all levels about how to better prepare our students for the future by building on the past. The discussion begins with a summary of changes in the World Wide Web, especially as the term Web 3.0 is being heard. The bulk of the discussion is retrospective and concerned with an overview of traditional teaching and research approaches as they evolved during the 20th century, beginning with those grounded in the Cartesian reality of I. A. Richards' (1929) Practical Criticism. The paper concludes with a proposal of five strategies which incorporate timeless elements from the past as well as cutting-edge elements from today, in order to better prepare our students for the future.

Keywords: Web 3.0, Web 2.0, I. A. Richards, literacy education, new literacies, technology, paradigm shifts.

5742 Topics of Blockchain Technology to Teach at Community College

Authors: Penn P. Wu, Jeannie Jo

Abstract:

Blockchain technology has rapidly gained popularity in industry. This paper attempts to help academia answer four questions. First, should community colleges begin offering education to nurture blockchain-literate students for the job market? Second, what are the appropriate topical areas to cover? Third, should it be an individual course? And fourth, should it be a technical or a management course? This paper starts by identifying the knowledge domains of blockchain technology and the topical areas each domain contains, continues by placing them in appropriate academic territories (Computer Sciences vs. Business) and subjects (programming, management, marketing, and law), and then develops an evaluation model to determine the appropriate topical areas for community colleges to teach. The evaluation is based on seven factors: maturity of technology, impacts on management, real-world applications, subject classification, knowledge prerequisites, textbook readiness, and recommended pedagogies. The evaluation results point in an interesting direction: offering an introductory course is an ideal option to guide students through the learning journey of what blockchain is and how it applies to business. Such an introductory course does not need to engage students in discussions of the mathematics and sciences that make blockchain technologies possible. While it is inevitable to touch on technical topics to help students build a solid knowledge foundation of blockchain technologies, community colleges should avoid offering students a course centered on the development of blockchain applications.

Keywords: Blockchain, pedagogies, blockchain technologies, blockchain course, blockchain pedagogies.

5741 Hierarchical Clustering Analysis with SOM Networks

Authors: Diego Ordonez, Carlos Dafonte, Minia Manteiga, Bernardino Arcayy

Abstract:

This work presents a neural network model for the clustering analysis of data based on Self Organizing Maps (SOM). The model evolves during the training stage towards a hierarchical structure according to the input requirements. The hierarchical structure symbolizes a specialization tool that provides refinements of the classification process. The structure behaves like a single map with different resolutions depending on the region to analyze. The benefits and performance of the algorithm are discussed in application to the Iris dataset, a classical example for pattern recognition.
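
A compact sketch of the basic SOM update rule on the Iris dataset (a flat, single-level map; the hierarchical growth mechanism described in the paper is not reproduced here, and the map size and schedules are illustrative):

import numpy as np
from sklearn.datasets import load_iris

X = load_iris().data
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize features

rng = np.random.default_rng(0)
rows, cols, dim = 6, 6, X.shape[1]
weights = rng.normal(size=(rows, cols, dim))    # prototype vectors of the map
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

n_epochs = 30
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)           # decaying learning rate
    radius = max(1.0, 3.0 * (1 - epoch / n_epochs))
    for x in X[rng.permutation(len(X))]:
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)   # best-matching unit
        grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
        h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))       # neighborhood function
        weights += lr * h[..., None] * (x - weights)

# Each sample is assigned to its best-matching unit (a cluster prototype).
labels = [np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)), (rows, cols))
          for x in X]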

Keywords: Neural networks, self-organizing feature maps, hierarchical systems, pattern clustering methods.

5740 Prediction of Compressive Strength of Self-Compacting Concrete with Fuzzy Logic

Authors: Paratibha Aggarwal, Yogesh Aggarwal

Abstract:

The paper presents the potential of fuzzy logic (FL-I) and neural network (ANN-I) techniques for predicting the compressive strength of SCC mixtures. Six input parameters, namely the contents of cement, sand, coarse aggregate and fly ash, the superplasticizer percentage, and the water-to-binder ratio, and one output parameter, the 28-day compressive strength, are used for the ANN-I and FL-I models. The fuzzy logic model showed better performance than the neural network model.
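
A hedged sketch of the neural-network half of such a comparison using scikit-learn; the network architecture, the mix data, and the fuzzy rule base are not specified in the abstract, so the arrays below are placeholders for the six mix parameters and the 28-day strength:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: cement, sand, coarse aggregate, fly ash, superplasticizer %, w/b ratio
X = rng.uniform([250, 700, 700, 0, 0.5, 0.30], [500, 950, 950, 200, 2.5, 0.45], (120, 6))
y = 0.08 * X[:, 0] + 0.05 * X[:, 3] - 60 * X[:, 5] + rng.normal(0, 2, 120)  # fake strength

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out mixes:", round(model.score(X_te, y_te), 3))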

Keywords: Self compacting concrete, compressive strength, prediction, neural network, Fuzzy logic.

5739 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely adopted. It supports two main tasks: displaying results by coloring items according to the item class or a feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, whereby all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from the high- to the low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points can be mapped to exactly the same position, making them indistinguishable, and such a model is unable to adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
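
The paper's optimization adds an explicit support-matching cost; as a rough approximation of reusing an embedding, one can seed a new t-SNE run from positions derived from an existing one, using scikit-learn's option of passing an ndarray as the initialization. The data, subset sizes, and seeding rule below are illustrative:

import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(500, 30))     # first (reference) subset
X_new = rng.normal(size=(500, 30))     # later subset to embed coherently

# Embed the reference subset once.
ref_embedding = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_ref)

# Seed each new point at the embedded position of its nearest reference
# neighbour in the high-dimensional space, then let t-SNE refine it.
nn = NearestNeighbors(n_neighbors=1).fit(X_ref)
_, idx = nn.kneighbors(X_new)
seed = ref_embedding[idx[:, 0]]

new_embedding = TSNE(n_components=2, init=seed, random_state=0).fit_transform(X_new)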

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.

5738 Transfer Function of Piezoelectric Material

Authors: C. Worakitjaroenphon, A. Oonsivilai

Abstract:

Past studies of piezoelectric materials have been carried out in the time domain (T-domain); however, the piezoelectric material has not been studied in the S-domain form. This paper presents the piezoelectric material as a transfer function, i.e., an S-domain model. The S-domain is a well-known mathematical model used for analyzing the stability of a material and determining its stability limits. Using the S-domain to test the stability of piezoelectric material provides a new tool for the scientific community to study this material in various forms.
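
A small sketch of an S-domain stability check with SciPy, using a generic second-order resonant transfer function as a stand-in; the actual transfer function of a piezoelectric element depends on material constants that are not given in the abstract:

import numpy as np
from scipy import signal

w0, zeta = 2 * np.pi * 1.0e4, 0.05          # hypothetical resonance and damping
H = signal.TransferFunction([w0 ** 2], [1.0, 2 * zeta * w0, w0 ** 2])

poles = H.poles
print("poles:", poles)
print("stable:", bool(np.all(poles.real < 0)))  # all poles in the left half-plane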

Keywords: Piezoelectric, Stability, S-Domain, Transfer function

5737 Performing Diagnosis in Building with Partially Valid Heterogeneous Tests

Authors: Houda Najeh, Mahendra Pratap Singh, Stéphane Ploix, Antoine Caucheteux, Karim Chabir, Mohamed Naceur Abdelkrim

Abstract:

Building systems are highly vulnerable to different kinds of faults and human misbehavior. Energy efficiency and user comfort are directly affected by abnormalities in building operation. The available fault diagnosis tools and methodologies rely mainly on rules or pure model-based approaches, and it is assumed that a model- or rule-based test can be applied to any situation without taking the actual testing context into account. Contextual tests with validity domains can greatly reduce the effort of designing detection tests. The main objective of this paper is to take fault validity into account when evaluating test models, considering non-modeled events such as occupancy, weather conditions, and door and window openings, and integrating the expert's knowledge of the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-, range- and model-based tests, known as heterogeneous tests, is proposed to reduce the modeling complexity. The calculation of logical diagnoses, drawing on artificial intelligence, provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.

Keywords: Heterogeneous tests, validity, building system, sensor grids, sensor fault, diagnosis, fault detection and isolation.

5736 Improvement Approach on Rotor Time Constant Adaptation with Optimum Flux in IFOC for Induction Machines Drives

Authors: S. Grouni, R. Ibtiouen, M. Kidouche, O. Touhami

Abstract:

Induction machine models used for steady-state and transient analysis require machine parameters that are usually considered design parameters or data. Knowledge of the induction machine parameters is very important for Indirect Field Oriented Control (IFOC); a mismatched set of parameters will degrade the speed and torque control response. This paper presents an improved approach to rotor time constant adaptation in IFOC for Induction Machines (IM). Our approach aims to improve the estimation accuracy of the fundamental model used for flux estimation. Based on a reduced-order IM model, the rotor fluxes and the rotor time constant are estimated using only the stator currents and voltages. This reduced-order model offers many advantages for real-time identification of the IM parameters.
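
For context, a standard reduced-order (current-model) rotor flux estimator of the kind alluded to above can be written in the rotor-flux reference frame as follows (a sketch of the textbook form, not necessarily the exact formulation of the paper):

\[
\frac{d\hat{\psi}_{r}}{dt} = \frac{L_{m}}{\hat{\tau}_{r}}\, i_{sd} - \frac{1}{\hat{\tau}_{r}}\,\hat{\psi}_{r},
\qquad
\omega_{sl} = \frac{L_{m}\, i_{sq}}{\hat{\tau}_{r}\, \hat{\psi}_{r}},
\qquad
\hat{\tau}_{r} = \frac{L_{r}}{R_{r}},
\]

so an error in the rotor time constant \(\hat{\tau}_{r}\) directly detunes both the estimated flux magnitude and the slip frequency on which field orientation relies.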

Keywords: Indirect Field Oriented Control (IFOC), Induction Machine (IM), rotor time constant, parameter adaptation approach, optimum rotor flux.

5735 Considering Assembly Operations and Product Structure for Manufacturing Cell Formation

Authors: M.B. Aryanezhad, J. Aliabadi

Abstract:

This paper considers the integration of assembly operations and product structure into Cellular Manufacturing System (CMS) design in order to correct the drawbacks of previous research in the literature. For this purpose, a new mathematical model is developed which assigns machining and assembly operations to manufacturing cells, with an objective function that minimizes the intercellular movements resulting from both. A linearization method is applied so that the optimum solution of the original nonlinear model can be obtained with standard optimization software such as Lingo. Then, using different examples and comparing the results, the importance of integrating assembly considerations is demonstrated.

Keywords: Assembly operations and product structure, cell formation, genetic algorithm.

5734 A New Reliability Based Channel Allocation Model in Mobile Networks

Authors: Anujendra, Parag Kumar Guha Thakurta

Abstract:

Data transmission between mobile hosts and base stations (BSs) in mobile networks is often vulnerable to failure. Efficient link connectivity, in terms of the services of both the base stations and the communication channels of the network, is therefore required in wireless mobile networks to achieve highly reliable data transmission. In addition, it is observed that the number of blocked hosts increases due to an insufficient number of channels during heavy load in the network. Under such a scenario, the channels must be allocated so as to offer reliable communication at any given time. Therefore, a reliability-based channel allocation model with acceptable system performance is proposed as a multi-objective optimization (MOO) problem in this paper. Two conflicting parameters, the Resource Reuse Factor (RRF) and the number of blocked calls, are optimized under a reliability constraint. The solution to this MOO problem is obtained through NSGA-II (Non-dominated Sorting Genetic Algorithm II). The effectiveness of the proposed model is shown with a set of experimental results.

Keywords: Base station, channel, GA, Pareto-optimal, reliability.

5733 Improving Order Quantity Model with Emergency Safety Stock (ESS)

Authors: Yousef Abu Nahleh, Alhasan Hakami, Arun Kumar, Fugen Daver

Abstract:

This study considers the problem of calculating safety stocks for inventory systems that face demand uncertainties in disaster situations. Safety stocks are essential to keep the supply chain, which is driven by forecasts of customer needs, responsive to demand uncertainties and to reach predefined target service levels. To address the uncertainties created by disaster situations affecting the industry sector, the concept of Emergency Safety Stock (ESS) is proposed. While there exists a huge body of literature on determining safety stock levels, this literature does not address the problems arising from disasters or how to deal with such situations. In this paper, the Order Quantity Model is improved to deal with demand uncertainty due to disasters by incorporating a new idea called ESS, which is based on the probability of disaster occurrence and uses a probability matrix calculated from historical data.
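
A sketch of how such an ESS term might enter the order quantity logic, assuming the classical EOQ lot size is kept and an expected emergency stock is added from a disaster-probability matrix; the paper's exact formulation is not given in the abstract, so the symbols below are illustrative:

\[
Q^{*} = \sqrt{\frac{2DS}{H}},
\qquad
\mathrm{ESS} = \sum_{i} p_{i}\, d_{i}\, L_{i},
\]

where \(D\) is the annual demand, \(S\) the ordering cost, \(H\) the holding cost per unit, \(p_{i}\) the probability of disaster scenario \(i\) taken from the historical probability matrix, \(d_{i}\) the extra demand rate it induces, and \(L_{i}\) its expected duration; the reorder point is then raised by ESS on top of the conventional safety stock.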

Keywords: Emergency Safety Stocks, Safety stocks, Order Quantity Model, Supply chain.

5732 Calibration of the Discrete Element Method Using a Large Shear Box

Authors: Corné J. Coetzee, Etienne Horn

Abstract:

One of the main challenges in using the Discrete Element Method (DEM) is to specify the correct input parameter values. In general, the models are sensitive to the input parameter values and accurate results can only be achieved if the correct values are specified. For the linear contact model, micro-parameters such as the particle density, stiffness, coefficient of friction, as well as the particle size and shape distributions are required. There is a need for a procedure to accurately calibrate these parameters before any attempt can be made to accurately model a complete bulk materials handling system. Since DEM is often used to model applications in the mining and quarrying industries, a calibration procedure was developed for materials that consist of relatively large (up to 40 mm in size) particles. A coarse crushed aggregate was used as the test material. Using a specially designed large shear box with a diameter of 590 mm, the confined Young’s modulus (bulk stiffness) and internal friction angle of the material were measured by means of the confined compression test and the direct shear test respectively. DEM models of the experimental setup were developed and the input parameter values were varied iteratively until a close correlation between the experimental and numerical results was achieved. The calibration process was validated by modelling the pull-out of an anchor from a bed of material. The model results compared well with experimental measurement.

Keywords: Discrete Element Method (DEM), calibration, shear box, anchor pull-out.

5731 Potential Climate Change Impacts on the Hydrological System of the Harvey River Catchment

Authors: Hashim Isam Jameel Al-Safi, P. Ranjan Sarukkalige

Abstract:

Climate change is likely to impact the Australian continent by changing rainfall trends, increasing temperature, and affecting the availability of water in both quantity and quality. This study investigates the possible impacts of future climate change on the hydrological system of the Harvey River catchment in Western Australia using a conceptual modelling approach (the HBV model). Daily observations of rainfall and temperature and the long-term monthly mean potential evapotranspiration from six weather stations were available for the period 1961-2015. The observed streamflow data at the Clifton Park gauging station for 33 years (1983-2015), together with the observed climate variables, were used to run, calibrate and validate the HBV model prior to the simulation process. The calibrated model was then forced with downscaled future climate signals from a multi-model ensemble of fifteen GCMs of CMIP3 under three emission scenarios (A2, A1B and B1) to simulate the future runoff at the catchment outlet. Two periods were selected to represent future climate conditions: the middle (2046-2065) and the end (2080-2099) of the 21st century. A control run with the reference climate period (1981-2000) was used to represent the current climate status. The modelling outcomes show an evident reduction in the mean annual streamflow during the middle of this century, particularly for the A1B scenario, relative to the control run. Toward the end of the century, all scenarios show relatively strong reduction trends in the mean annual streamflow, especially the A1B scenario, compared to the control run. The decline in the mean annual streamflow ranges between 4% and 15% around the middle of the current century and between 9% and 42% by the end of the century.

Keywords: Climate change impact, Harvey catchment, HBV model, hydrological modelling, GCMs, LARS-WG, Australia.

5730 A Product Development for Green Logistics Model by Integrated Evaluation of Design and Manufacturing and Green Supply Chain

Authors: Yuan-Jye Tseng, Yen-Jung Wang

Abstract:

A product development for green logistics model using the fuzzy analytic network process method is presented for evaluating the relationships among the product design, the manufacturing activities, and the green supply chain. In the product development stage, there can be alternative ways to design the detailed components to satisfy the design concept and product requirement. In different design alternative cases, the manufacturing activities can be different. In addition, the manufacturing activities can affect the green supply chain of the components and product. In this research, a fuzzy analytic network process evaluation model is presented for evaluating the criteria in product design, manufacturing activities, and green supply chain. The comparison matrices for evaluating the criteria among the three groups are established. The total relational values between the three groups represent the relationships and effects. In application, the total relational values can be used to evaluate the design alternative cases for decision-making to select a suitable design case and the green supply chain. In this presentation, an example product is illustrated. It shows that the model is useful for integrated evaluation of design and manufacturing and green supply chain for the purpose of product development for green logistics.
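
A sketch of the core priority-derivation step underlying ANP-type evaluation, using a crisp pairwise comparison matrix and its principal eigenvector; the full fuzzy ANP adds fuzzified judgments and a supermatrix across the three criteria groups, and the matrix values below are illustrative:

import numpy as np

# Hypothetical pairwise comparisons among three criteria groups:
# design, manufacturing, green supply chain (Saaty-style scale).
A = np.array([[1.0, 3.0, 2.0],
              [1/3, 1.0, 1/2],
              [1/2, 2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # priority (relational) weights

ci = (eigvals[k].real - len(A)) / (len(A) - 1)  # consistency index
print("priorities:", np.round(w, 3), " CI:", round(ci, 3))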

Keywords: Supply chain management, green supply chain, product development for logistics, fuzzy analytic network process.

5729 A Bayesian Kernel for the Prediction of Protein-Protein Interactions

Authors: Hany Alashwal, Safaai Deris, Razib M. Othman

Abstract:

Understanding protein functions is a major goal in the post-genomic era. Proteins usually work in the context of other proteins and rarely function alone; it is therefore highly relevant to study the interaction partners of a protein in order to understand its function. Machine learning techniques have been widely applied to predict protein-protein interactions. Kernel functions play an important role in a successful machine learning technique, and choosing an appropriate kernel function can lead to better accuracy in a binary classifier such as the support vector machine. In this paper, we describe a Bayesian kernel for the support vector machine to predict protein-protein interactions. The Bayesian kernel can improve classifier performance by incorporating the probability characteristics of the available experimental protein-protein interaction data, which were compiled from different sources. In addition, the probabilistic output of the Bayesian kernel can assist biologists in conducting further research on the highly ranked predicted interactions. The results show that the accuracy of the classifier is improved using the Bayesian kernel compared to the standard SVM kernels, implying that protein-protein interactions can be predicted with better accuracy using the Bayesian kernel.
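
scikit-learn's SVC accepts a user-defined kernel, which is the mechanism by which a probability-aware kernel of this kind could be plugged in; the callable below is a plain RBF placeholder and the data are synthetic, so this is only a sketch of the plumbing, not the authors' Bayesian kernel:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))      # placeholder protein-pair feature vectors
y = rng.integers(0, 2, 200)         # placeholder interaction labels

def bayesian_like_kernel(A, B, gamma=0.1):
    # Placeholder Gram matrix: a plain RBF; the authors' Bayesian kernel would
    # additionally fold prior interaction probabilities into these entries.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

clf = SVC(kernel=bayesian_like_kernel).fit(X, y)
print("training accuracy:", clf.score(X, y))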

Keywords: Bioinformatics, Protein-protein interactions, Bayesian Kernel, Support Vector Machines.

5728 The Use of Minor Setups in an EPQ Model with Constrained Production Period Length

Authors: Behrouz Afshar Nadjafi

Abstract:

Extensive research has been devoted to the economic production quantity (EPQ) problem; however, no attention has been paid to problems where the production period length is constrained. In this paper, we address the problem of deciding the optimal production quantity and the number of minor setups within each cycle when the production period length is constrained but a minor setup can be used to satisfy the constraint. A mathematical model is developed, and Iterated Local Search (ILS) is proposed to solve the problem. Finally, the solution procedure is illustrated with a numerical example and the results are analyzed.

Keywords: EPQ, Inventory control, minor setup, ILS.

5727 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but it is not a normal bean either: a peaberry forms when a coffee cherry produces only a single, relatively round seed instead of the usual flat-sided pair of beans, and it has a different value and flavor. To improve the taste of the coffee, it is necessary to separate peaberries from normal beans before roasting the green coffee beans; otherwise, the flavors of the beans are mixed and the overall taste suffers. During roasting, all beans should be uniform in shape, size, and weight; otherwise, larger beans take more time to roast through. Peaberries have a different size and shape even though they have the same weight as normal beans, and they roast more slowly than normal beans; therefore, neither size- nor weight-based sorting provides a good option for selecting them. Defective beans, e.g., sour, broken, black, and faded beans, are easy to spot and pick out by hand. In contrast, picking out peaberries is very difficult even for trained specialists, because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of the sorting system. As a first step, we applied deep Convolutional Neural Networks (CNN) and the Support Vector Machine (SVM) as machine learning techniques to discriminate between peaberries and normal beans. Better performance was obtained with the CNN than with the SVM for the discrimination of peaberries. The artificial neural network trained in this work on a high-performance CPU and GPU will then be installed on an inexpensive, computationally limited Raspberry Pi system, since we assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
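
A compact sketch of the CNN side of such a comparison with Keras; the input size, layer counts, and data names are assumptions, since the actual architecture used in the study is not given in the abstract:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),            # assumed bean image size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # peaberry vs. normal bean
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)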

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.

5726 A Study on Removal of Toluidine Blue Dye from Aqueous Solution by Adsorption onto Neem Leaf Powder

Authors: Himanshu Patel, R. T. Vashi

Abstract:

The adsorption of Toluidine blue dye from aqueous solutions onto Neem Leaf Powder (NLP) has been investigated. The surface of this natural material was characterized by particle size analysis, Scanning Electron Microscopy (SEM), Fourier Transform Infrared (FTIR) spectroscopy and X-Ray Diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature and contact duration on the adsorption capacity were evaluated, among which pH was found to be the most influential. The equilibrium data were analyzed using the Langmuir and Freundlich isotherms, and kinetic models such as the pseudo-first-order and pseudo-second-order models and the Elovich equation were utilized to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm and the pseudo-second-order kinetic model. The thermodynamic parameters, namely the free energy of adsorption (ΔG°), the enthalpy change (ΔH°) and the entropy change (ΔS°), were also determined and evaluated.
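
For reference, the standard forms of the models named above are:

\[
q_{e} = \frac{q_{m} K_{L} C_{e}}{1 + K_{L} C_{e}} \;\text{(Langmuir)},
\qquad
q_{e} = K_{F}\, C_{e}^{1/n} \;\text{(Freundlich)},
\qquad
\frac{t}{q_{t}} = \frac{1}{k_{2} q_{e}^{2}} + \frac{t}{q_{e}} \;\text{(pseudo-second-order)},
\]

and the thermodynamic parameters follow from \(\Delta G^{\circ} = -RT \ln K\) and \(\Delta G^{\circ} = \Delta H^{\circ} - T\,\Delta S^{\circ}\), where \(C_e\) is the equilibrium dye concentration and \(q_e\) the amount adsorbed per unit mass of NLP at equilibrium.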

Keywords: Adsorption, isotherm models, kinetic models, temperature, toluidine blue dye, surface chemistry.

5725 Decision-Making Criteria of PPP Projects: Stakeholder Theoretic Perspective

Authors: Xueqin Shan, Wenhua Hou, Xiaosu Ye, Chuanming Wu

Abstract:

Any decision-making is based on a certain theory. Taking public rental housing in Chongqing municipality as an example, this essay argues that stakeholder theory can provide innovative criteria and evaluation methods for Public Private Partnership (PPP) projects. It analyzes how to choose decision-making criteria for different stakeholders in the PPP model and what measures to take to meet those criteria, so as to form a "symbiotic" decision-making mode through contracts and to boost the application of the PPP model in large-scale public programs in China.

Keywords: PPP, Stakeholder Theory, Stakeholders, Decision-making Criteria

5724 Instability of Electron Plasma Waves in an Electron-Hole Bounded Quantum Dusty Plasma

Authors: Basudev Ghosh, Sailendranath Paul, Sreyasi Banerjee

Abstract:

Using the quantum hydrodynamical (QHD) model, the linear dispersion relation for electron plasma waves propagating in a cylindrical waveguide filled with a dense plasma containing streaming electrons, holes and stationary charged dust particles has been derived. It is shown that the effects of the finite boundary and of the streaming velocities of the electrons and holes make some of the possible propagation modes linearly unstable. The growth rate of this instability is shown to depend significantly on different plasma parameters.

Keywords: Electron Plasma wave, Quantum plasma, Quantum Hydrodynamical model.

5723 Reliability-Based Life-Cycle Cost Model for Engineering Systems

Authors: Reza Lotfalian, Sudarshan Martins, Peter Radziszewski

Abstract:

The effect of reliability on life-cycle cost, including the initial and maintenance costs of a system, is studied. The failure probability of a component is used to calculate the average maintenance cost during the operating cycle of the component. The standard deviation of the life-cycle cost is also calculated as an error measure for the average life-cycle cost. As a numerical example, the model is used to study the average life-cycle cost of an electric motor.
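
A sketch of the kind of expected-cost expression involved, assuming a component with a per-cycle failure probability p_f, a repair cost C_m, and failures that are independent from cycle to cycle (the paper's exact formulation is not reproduced in the abstract):

\[
\mathbb{E}[\mathrm{LCC}] = C_{0} + N\, p_{f}\, C_{m},
\qquad
\sigma_{\mathrm{LCC}} = C_{m}\sqrt{N\, p_{f}\,(1 - p_{f})},
\]

where \(C_{0}\) is the initial cost and \(N\) the number of operating cycles; under the independence assumption the maintenance events form a binomial count, which gives the standard deviation above as the error measure on the average life-cycle cost.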

Keywords: Initial Cost, Life-cycle cost, Maintenance Cost, Reliability.

5722 Simulation Tools for Fixed Point DSP Algorithms and Architectures

Authors: K. B. Cullen, G. C. M. Silvestre, N. J. Hurley

Abstract:

This paper presents software tools that convert the C/C++ floating-point source code for a DSP algorithm into a fixed-point simulation model that can be used to evaluate the numerical performance of the algorithm on several different fixed-point platforms, including microprocessors, DSPs and FPGAs. The tools use a novel system for maintaining binary point information so that the conversion from floating point to fixed point is automated and the resulting fixed-point algorithm achieves the maximum possible precision. A configurable architecture is used during the simulation phase so that the algorithm can produce a bit-exact output for several different target devices.
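
A small sketch of the floating- to fixed-point mapping that such tools automate, using a signed Qm.n representation; the word length and binary-point position are chosen here purely for illustration:

import numpy as np

def to_fixed(x, frac_bits=12, word_bits=16):
    """Quantize to signed Qm.n: scale, round, and saturate to the word length."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    return np.clip(np.round(np.asarray(x) * scale), lo, hi).astype(np.int64)

def to_float(q, frac_bits=12):
    return np.asarray(q, dtype=np.float64) / (1 << frac_bits)

coeffs = np.array([0.970, -1.940, 0.970])       # e.g. filter coefficients
q = to_fixed(coeffs)
print("fixed-point codes:", q)
print("quantization error:", to_float(q) - coeffs)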

Keywords: DSP devices, DSP algorithm, simulation model, software

5721 Application of Feed Forward Neural Networks in Modeling and Control of a Fed-Batch Crystallization Process

Authors: Petia Georgieva, Sebastião Feyo de Azevedo

Abstract:

This paper is focused on nonlinear dynamic process modeling and model-based predictive control of a fed-batch sugar crystallization process, applying the concept of artificial neural networks as computational tools. The control objective is to force the operation to follow an optimal supersaturation trajectory. This is achieved by manipulating the feed flow rate of sugar liquor/syrup, which is considered the control input. A feed forward neural network (FFNN) model of the process is first built as part of the controller structure to predict the process response over a specified (prediction) horizon. The predictions are supplied to an optimization procedure that determines the values of the control action over a specified (control) horizon by minimizing a predefined performance index. The control task is rather challenging due to the strong nonlinearity of the process dynamics and variations in the crystallization kinetics. Nevertheless, the simulation results demonstrated smooth behavior of the control actions and satisfactory reference tracking.

Keywords: Feed forward neural network, process modelling, model predictive control, crystallization process.

5720 Bayesian Networks for Earthquake Magnitude Classification in an Early Warning System

Authors: G. Zazzaro, F.M. Pisano, G. Romano

Abstract:

During the last decades, researchers worldwide have dedicated efforts to developing machine-based seismic Early Warning systems, aiming at reducing the huge human losses and economic damages caused by earthquakes. The processing time of seismic waveforms must be reduced in order to increase the time interval available for the activation of safety measures. This paper proposes a Data Mining model able to estimate, correctly and quickly, the dangerousness of an ongoing seismic event. Several thousand seismic recordings of Japanese and Italian earthquakes were analyzed, and a model was obtained by means of a Bayesian Network (BN), which was tested on just the first recordings of seismic events in order to reduce the decision time; the test results were very satisfactory. The model was integrated into an Early Warning System prototype able to collect and elaborate data from a seismic sensor network, estimate the dangerousness of the ongoing earthquake, and decide promptly whether to activate the warning.

Keywords: Bayesian Networks, Decision Support System, Magnitude Classification, Seismic Early Warning System

5719 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers, and it is particularly fundamental when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice for consistently reducing the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children, 2nd ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.

Keywords: Feature selection, multi-objective evolutionary computation, unsupervised classification, behavior assessment system for children.
