Search results for: generalized linear model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8757

5067 Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment

Authors: M. Bohlouli, M. Analoui

Abstract:

For complete support of Quality of Service, it is preferable that the Grid computing environment itself predicts the resource requirements of a job by using dedicated methods. An exact and correct prediction allows the required resources to be matched precisely with the available resources. After the execution of each job, the resources it used are saved in an active database named "History". First, some attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar executed jobs are retrieved from "History", and the resource requirements are predicted using statistical tools such as linear regression or averaging. The new idea in this research is the use of an active database and centralized history maintenance. Implementation and testing of the proposed architecture results in a prediction accuracy of 96.68% for CPU usage, 91.29% for memory usage, and 89.80% for bandwidth usage.
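
As a minimal sketch of the history-based prediction idea, the snippet below retrieves the most similar previously executed jobs by attribute similarity and averages their recorded usage. The attribute layout, the Euclidean similarity measure, and the toy "History" records are assumptions for illustration only, not the authors' exact algorithm.

```python
import numpy as np

def most_similar_jobs(history, new_job, k=5):
    """Rank executed jobs in 'history' by Euclidean similarity of their
    attribute vectors to the new job and return the k closest (assumed measure)."""
    attrs = np.array([h["attributes"] for h in history], dtype=float)
    dists = np.linalg.norm(attrs - np.asarray(new_job, dtype=float), axis=1)
    return [history[i] for i in np.argsort(dists)[:k]]

def predict_usage(history, new_job, resource="cpu", k=5):
    """Predict a resource requirement as the average usage of the k most similar
    executed jobs (linear regression could be used instead when enough samples exist)."""
    similar = most_similar_jobs(history, new_job, k)
    return float(np.mean([job["usage"][resource] for job in similar]))

# Toy stand-in for the "History" table.
history = [
    {"attributes": [2, 512],  "usage": {"cpu": 0.70, "memory": 0.40, "bandwidth": 0.20}},
    {"attributes": [4, 1024], "usage": {"cpu": 0.85, "memory": 0.55, "bandwidth": 0.30}},
    {"attributes": [1, 256],  "usage": {"cpu": 0.35, "memory": 0.25, "bandwidth": 0.10}},
]
print(predict_usage(history, new_job=[3, 768], resource="cpu", k=2))
```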

Keywords: Active Database, Grid Computing, Resource Requirement Prediction, Scheduling.

5066 GMDH Modeling Based on Polynomial Spline Estimation and Its Applications

Authors: Li Qiu-min, Tian Yi-xiang, Zhang Gao-xun

Abstract:

The GMDH algorithm can describe the internal structure of objects well. In the process of modeling, automatic screening of the model structure and variables ensures the convergence rate. This paper studies a new GMDH model based on polynomial spline estimation. The polynomial spline function is used instead of the transfer function of GMDH to characterize the relationship between the input variables and output variables. It is proved that the algorithm has the optimal convergence rate under some conditions. The empirical results show that the algorithm forecasts the Consumer Price Index (CPI) well.
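
To make the layer-wise screening concrete, the sketch below implements one classical GMDH selection layer with quadratic two-input partial descriptions fitted by least squares and ranked on validation error. The spline transfer function proposed in the paper is not reproduced here; the data, layer width, and polynomial form are assumptions.

```python
import numpy as np
from itertools import combinations

def design(xi, xj):
    # Quadratic Ivakhnenko polynomial in two inputs (classical GMDH choice).
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_train, y_train, X_val, y_val, keep=3):
    """Fit a partial model for every input pair, score it on validation data,
    and keep the best candidates for the next layer."""
    candidates = []
    for i, j in combinations(range(X_train.shape[1]), 2):
        A = design(X_train[:, i], X_train[:, j])
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred_val = design(X_val[:, i], X_val[:, j]) @ coef
        mse = np.mean((y_val - pred_val) ** 2)
        candidates.append((mse, (i, j), coef))
    candidates.sort(key=lambda c: c[0])
    return candidates[:keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 1.5 * X[:, 0] * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=200)
best = gmdh_layer(X[:150], y[:150], X[150:], y[150:])
print([(round(m, 4), pair) for m, pair, _ in best])
```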

Keywords: spline, GMDH, nonparametric, bias, forecast.

5065 Managing the Information System Life Cycle in Construction and Manufacturing

Authors: Carlos J. Costa, Manuela Aparício

Abstract:

In this paper we present the information life cycle and analyze the importance of managing the corporate application portfolio across this life cycle. The approach presented here is not just an extension of the traditional information system development life cycle; it is based on the generic life cycle. We propose a model of an information system life cycle, supported by the assumption that a system has a limited life, but that this limited life may be extended. The model has also been applied in several cases; two examples of the framework's application, in a construction enterprise and in a manufacturing enterprise, are reported here.

Keywords: Information systems/technology, information systems life cycle, organization engineering, information economics.

5064 A Competitive Replica Placement Methodology for Ad Hoc Networks

Authors: Samee Ullah Khan, C. Ardil

Abstract:

In this paper, a mathematical model for data object replication in ad hoc networks is formulated. The derived model is general, flexible, and adaptable to cater for various applications in ad hoc networks. We propose a game-theoretical technique in which players (mobile hosts) continuously compete in a non-cooperative environment to improve data accessibility by replicating data objects. The technique incorporates the access frequency from mobile hosts to each data object, the status of the network connectivity, and communication costs. The proposed technique is extensively evaluated against four well-known ad hoc network replica allocation methods. The experimental results reveal that the proposed approach outperforms the four techniques in both execution time and solution quality.

Keywords: Data replication, auctions, static allocation.

5063 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain

Authors: Suman Senapati, Goutam Saha

Abstract:

Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions, causing perturbations that lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent, whereas wavelet coefficients in fact have significant inter-scale dependency. This paper exploits this inter-scale dependency through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and a corresponding joint shrinkage estimator is derived by Maximum a Posteriori (MAP) estimation. Based on these, a framework is proposed to denoise speech signals for automatic speaker identification. The robustness of the proposed framework is tested for text-independent speaker identification on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy than other estimators on the popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.
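
For intuition, the sketch below applies the classical bivariate MAP shrinkage rule (Sendur-Selesnick form), which is the standard way to exploit parent-child inter-scale dependency; the paper's CS-PDF/SIRP prior in the Log Gabor Wavelet domain would change the prior but not the overall shrink-then-reconstruct structure. The stand-in coefficients, noise level, and variances are assumptions.

```python
import numpy as np

def bivariate_shrink(w, w_parent, sigma_n, sigma):
    """Shrink a coefficient w jointly with its parent from the coarser scale."""
    mag = np.sqrt(w**2 + w_parent**2)
    gain = np.maximum(mag - np.sqrt(3) * sigma_n**2 / sigma, 0.0) / np.maximum(mag, 1e-12)
    return gain * w

rng = np.random.default_rng(2)
clean = rng.laplace(scale=1.0, size=1000)        # stand-in wavelet coefficients
parent = clean + 0.3 * rng.normal(size=1000)     # correlated coarser-scale parents
noisy = clean + 0.5 * rng.normal(size=1000)

denoised = bivariate_shrink(noisy, parent, sigma_n=0.5, sigma=np.std(clean))
print("MSE noisy:", np.mean((noisy - clean) ** 2),
      "MSE shrunk:", np.mean((denoised - clean) ** 2))
```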

Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.

5062 Frequency-Dependent and Full Range Tunable Phase Shifter

Authors: Yufu Yin, Tao Lin, Shanghong Zhao, Zihang Zhu, Xuan Li, Wei Jiang, Qiurong Zheng, Hui Wang

Abstract:

In this paper, a frequency-dependent and tunable phase shifter is proposed and numerically analyzed. The key devices are the dual-polarization binary phase shift keying (DP-BPSK) modulator and the fiber Bragg grating (FBG). The phase-frequency response of the FBG is employed to determine the frequency-dependent phase shift. The simulation results show that a linear phase shift of the recovered output microwave signal, dependent on the frequency of the input RF signal, is achieved. In addition, by adjusting the power of the RF signal, the full-range phase shift from 0° to 360° can be realized. The structure exhibits a spurious-free dynamic range (SFDR) of 70.90 dB·Hz^(2/3) and 72.11 dB·Hz^(2/3) under different RF powers.

Keywords: Microwave photonics, phase shifter, spurious free dynamic range, frequency-dependent.

5061 Modeling Directional Thermal Radiance Anisotropy for Urban Canopy

Authors: Limin Zhao, Xingfa Gu, C. Tao Yu

Abstract:

One of the significant factors for improving the accuracy of Land Surface Temperature (LST) retrieval is a correct understanding of the directional anisotropy of thermal radiance. In this paper, the multiple scattering effect between heterogeneous non-isothermal surfaces is described rigorously using the concept of the configuration factor, on which basis a directional thermal radiance model is built and the directional radiant character of the urban canopy is analyzed. The model is applied to a simple urban canopy with row structure to simulate the change of Directional Brightness Temperature (DBT). The results show that the DBT is increased by the multiple scattering effects, whereas its range of variation is smoothed. The temperature difference, spatial distribution, and emissivity of the components can all lead to changes in DBT. The "hot spot" phenomenon occurs when the proportion of the high-temperature component in the field of view reaches its maximum, while the "cool spot" phenomenon occurs when the proportion of the low-temperature component reaches its maximum. The "spot" effect disappears only when the proportion of every component remains constant. The model built in this paper can be used for the study of the directional effect on emissivity, LST retrieval over urban areas, and the adjacency effect of thermal remote sensing pixels.

Keywords: Directional thermal radiance, multiple scattering, configuration factor, urban canopy, hot spot effect

5060 Using an Artificial Neural Network to Estimate Chemical Oxygen Demand

Authors: S. Areerachakul

Abstract:

The yearly increase in human population results in growing water usage and demand. The Saen Saep canal is an important canal in Bangkok. The main objective of this study is to use an Artificial Neural Network (ANN) model to estimate the Chemical Oxygen Demand (COD) using data from 11 sampling sites. The data were obtained from the Department of Drainage and Sewerage, Bangkok Metropolitan Administration, during 2007-2011. Twelve water quality parameters, which affect the COD, are used as inputs to the model. The experimental results indicate that the ANN model provides a high correlation coefficient (R = 0.89).
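
A minimal sketch of this kind of ANN regression is shown below. The network size, the scaler, and the synthetic data are assumptions; in practice the study's twelve measured water quality parameters and observed COD values would replace the stand-in arrays.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 12))                                  # 12 water quality inputs (stand-in)
y = X @ rng.normal(size=12) + rng.normal(scale=0.5, size=500)   # stand-in COD values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("correlation R:", np.corrcoef(y_te, pred)[0, 1])
```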

Keywords: Artificial neural network, chemical oxygen demand, estimate, surface water.

5059 The Physics of Gravity: A Hypothesis Based on Classical Physics

Authors: I. V. Kuzminov

Abstract:

An alternative hypothesis of the physics of gravitation is put forward in this paper. The hypothesis is constructed on the laws of classical physics. The expansion of the Universe explains the physics of gravity: the expansion induces the resistance of the gyroscopic forces of electron rotation. The second component of the gravitational forces is the resistance arising from the second derivative of linear expansion. This hypothesis does not reject the existing, empirically constructed foundations; the forces of gravitation and inertia share a common nature, which has been recognized before. The presented hypothesis does not criticize existing theories of gravitation; rather, it explores a separate theme. It is important to acknowledge that the expansion of the Universe is assumed to be isotropic. The proposed hypothesis provides a general direction for further research; this article does not aim to encompass all possible aspects of future investigations.

Keywords: Gyroscopic forces, the unity of the micro- and macrocosm, the expansion of the universe, the second derivative of expansion.

5058 Using the Combined Model of PROMETHEE and Fuzzy Analytic Network Process for Determining Question Weights in Scientific Exams through Data Mining Approach

Authors: Hassan Haleh, Amin Ghaffari, Parisa Farahpour

Abstract:

The need for an appropriate system for evaluating students' educational development is a key problem in achieving predefined educational goals. The number of related papers in recent years that try to prove or disprove the necessity and adequacy of student assessment corroborates this. Some of these studies have tried to increase the precision of determining question weights in scientific examinations, but in all of them the attempt was to adjust the initial question weights while the accuracy and precision of those initial weights remained in question. Thus, in order to increase the precision of assessing students' educational development, the present study proposes a new method for determining the initial question weights by considering factors of the questions such as difficulty, importance, and complexity, and by implementing a combined method of PROMETHEE and fuzzy analytic network process using a data mining approach to improve the model's inputs. The results of the implemented case study demonstrate the improved performance and precision of the proposed model.
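
To illustrate the outranking step only, the sketch below computes PROMETHEE II net flows for a few questions scored on difficulty, importance, and complexity. The criteria values, weights, and linear preference function are illustrative assumptions; the fuzzy ANP and data mining components of the paper are not reproduced.

```python
import numpy as np

def promethee_net_flows(scores, weights, p=1.0):
    """scores: (n_alternatives, n_criteria) matrix of criteria to be maximized."""
    n = scores.shape[0]
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            # Linear preference function with threshold p on each criterion.
            pref[a, b] = np.dot(weights, np.clip(d / p, 0.0, 1.0))
    phi_plus = pref.sum(axis=1) / (n - 1)     # positive outranking flow
    phi_minus = pref.sum(axis=0) / (n - 1)    # negative outranking flow
    return phi_plus - phi_minus               # net flow used for ranking/weighting

questions = np.array([[0.8, 0.6, 0.7],        # difficulty, importance, complexity
                      [0.4, 0.9, 0.5],
                      [0.6, 0.5, 0.9]])
weights = np.array([0.4, 0.35, 0.25])
print(promethee_net_flows(questions, weights))
```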

Keywords: Assessing students, Analytic network process, Clustering, Data mining, Fuzzy sets, Multi-criteria decision making, Preference function.

5057 Context Modeling and Reasoning Approach in Context-Aware Middleware for URC System

Authors: Chung-Seong Hong, Hyung-Sun Kim, Joonmyun Cho, Hyun Kyu Cho, Hyun-Chan Lee

Abstract:

To realize the vision of ubiquitous computing, it is important to develop a context-aware infrastructure which can help ubiquitous agents, services, and devices become aware of their contexts because such computational entities need to adapt themselves to changing situations. A context-aware infrastructure manages the context model representing contextual information and provides appropriate information. In this paper, we introduce Context-Aware Middleware for URC System (hereafter CAMUS) as a context-aware infrastructure for a network-based intelligent robot system and discuss the ontology-based context modeling and reasoning approach which is used in that infrastructure.

Keywords: CAMUS, Context-Aware, Context Model, Ontology.

5056 Difference of Properties on Surface Leakage and Discharge Currents of Porcelain Insulator Material

Authors: Waluyo, Ngapuli I. Sinisuka, Suwarno, Maman A. Djauhari

Abstract:

This paper presents experimental results comparing leakage currents and discharge currents. The leakage currents were obtained on a polluted porcelain insulator, whereas the discharge currents were obtained on a lightly artificially polluted porcelain specimen. The measured quantities were the leakage or discharge current and the applied voltage. The insulator or specimen was placed in a hermetically sealed chamber, and the current waveforms were analyzed using the FFT. The results indicated that for the leakage current (LC) under low relative humidity, the fifth harmonic was visible, followed by the seventh harmonic, and the insulator showed a capacitive property. At 99% relative humidity, the fifth harmonic was also visible, and the phase angle reached up to 12.2 degrees. For the discharge current, the third harmonic was visible, followed by the fifth harmonic, and the third harmonic increased as the pressure was reduced. Under this condition, the specimen exhibited a non-linear characteristic.
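
The sketch below shows how such harmonic ratios and phase angles can be extracted from a current waveform with the FFT. The waveform is synthetic, and the 50 Hz fundamental, sampling rate, and harmonic mix are assumptions used only to demonstrate the analysis.

```python
import numpy as np

fs, f0 = 10_000, 50.0                         # assumed sampling rate and fundamental (Hz)
t = np.arange(0, 0.2, 1 / fs)
current = (1.0 * np.sin(2 * np.pi * f0 * t)
           + 0.15 * np.sin(2 * np.pi * 5 * f0 * t + np.deg2rad(12.2))
           + 0.05 * np.sin(2 * np.pi * 7 * f0 * t))

spectrum = np.fft.rfft(current)
freqs = np.fft.rfftfreq(current.size, d=1 / fs)

def harmonic(n):
    """Magnitude and phase (degrees) of the n-th harmonic bin."""
    idx = np.argmin(np.abs(freqs - n * f0))
    return np.abs(spectrum[idx]), np.angle(spectrum[idx], deg=True)

fund_mag, _ = harmonic(1)
for n in (3, 5, 7):
    mag, phase = harmonic(n)
    print(f"harmonic {n}: {100 * mag / fund_mag:.1f}% of fundamental, phase {phase:.1f} deg")
```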

Keywords: leakage current, discharge current, third harmonic, fifth harmonic, porcelain.

5055 On One Mathematical Model for Filtration of Weakly Compressible Chemical Compound in the Porous Heterogeneous 3D Medium. Part I: Model Construction with the Aid of the Ollendorff Approach

Authors: Sharif E. Guseynov, Jekaterina V. Aleksejeva, Janis S. Rimshans

Abstract:

A filtering problem of an almost incompressible liquid chemical compound in a porous inhomogeneous 3D domain is studied. In this work, general approaches to the solution of two-dimensional filtering problems in an anisotropic, inhomogeneous and multilayered medium are developed, and on the basis of the obtained results mathematical models are constructed (according to the Ollendorff method) for studying the engineering and technical problem of filtering the almost incompressible liquid chemical compound in the porous inhomogeneous 3D domain. For some of the formulated mathematical problems with additional requirements on the structure of the porous inhomogeneous medium, namely its isotropy and the spatial periodicity of its permeability coefficient, solution algorithms are proposed. A continuation of the current work, titled "On one mathematical model for filtration of weakly compressible chemical compound in the porous heterogeneous 3D medium. Part II: Determination of the reference directions of anisotropy and permeabilities on these directions", will be prepared by the authors shortly.

Keywords: Porous media, filtering, permeability, elliptic PDE.

5054 Optimization of the Input Layer Structure for Feed-Forward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. Using correlation analysis, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamic model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and optimal number of neurons for the neural network are investigated.
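
The sketch below shows the correlation-based regressor selection idea on a toy system: candidate lagged inputs and outputs are built, and only those whose correlation with the target exceeds a threshold are kept for the NARX input layer. The signal, lag range, and threshold are assumptions, not the authors' vehicle-dynamics data.

```python
import numpy as np

def build_regressors(u, y, max_lag=5):
    """Candidate regressors: lagged inputs u(t-k) and outputs y(t-k)."""
    rows, names = [], []
    for k in range(1, max_lag + 1):
        rows.append(np.roll(u, k)); names.append(f"u(t-{k})")
        rows.append(np.roll(y, k)); names.append(f"y(t-{k})")
    X = np.column_stack(rows)[max_lag:]            # drop wrapped-around samples
    return X, np.asarray(names), y[max_lag:]

rng = np.random.default_rng(1)
u = rng.normal(size=2000)
y = np.zeros_like(u)
for t in range(2, u.size):                          # toy second-order system
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.8 * u[t - 1] + 0.01 * rng.normal()

X, names, target = build_regressors(u, y)
corr = np.abs([np.corrcoef(X[:, j], target)[0, 1] for j in range(X.shape[1])])
keep = corr > 0.3                                   # assumed significance threshold
print("selected regressors:", list(names[keep]))
```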

Keywords: Correlation analysis, F-ratio, Levenberg-Marquardt, MSE, NARX, neural network, optimisation.

5053 Puff Noise Detection and Cancellation for Robust Speech Recognition

Authors: Sangjun Park, Jungpyo Hong, Byung-Ok Kang, Yun-keun Lee, Minsoo Hahn

Abstract:

In this paper, an algorithm for detecting and attenuating puff noises, which are frequently generated in the mobile environment, is proposed. As a baseline system, a puff detection system is designed based on a Gaussian Mixture Model (GMM), and 39th-order Mel Frequency Cepstral Coefficients (MFCCs) are extracted as feature parameters. To improve the detection performance, effective acoustic features for puff detection are proposed. In addition, detected puff intervals are attenuated by high-pass filtering. The speech recognition rate was measured for evaluation, and the confusion matrix and ROC curve are used to confirm the validity of the proposed system.
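
A heavily hedged sketch of the detect-then-attenuate idea follows: two GMMs (speech vs. puff) score each frame's feature vector, and frames classified as puff are passed through a high-pass filter. Stand-in random feature vectors replace the paper's 39th-order MFCCs, and the GMM sizes, frame length, and cutoff frequency are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
speech_feats = rng.normal(0.0, 1.0, size=(500, 39))    # stand-in speech frames
puff_feats = rng.normal(3.0, 1.0, size=(500, 39))      # stand-in puff frames

gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speech_feats)
gmm_puff = GaussianMixture(n_components=4, random_state=0).fit(puff_feats)

def is_puff(frame_feat):
    """Frame-level detection by comparing GMM log-likelihoods."""
    f = frame_feat.reshape(1, -1)
    return gmm_puff.score(f) > gmm_speech.score(f)

# Attenuate detected puff frames with an (assumed) 300 Hz high-pass filter.
fs, frame_len = 16_000, 400                             # 25 ms frames at 16 kHz
sos = butter(4, 300, btype="highpass", fs=fs, output="sos")
signal = rng.normal(size=fs)                            # stand-in 1 s waveform

for start in range(0, signal.size - frame_len, frame_len):
    feat = rng.normal(3.0, 1.0, size=39)                # stand-in feature for this frame
    if is_puff(feat):
        signal[start:start + frame_len] = sosfiltfilt(sos, signal[start:start + frame_len])
```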

Keywords: Gaussian mixture model, puff detection and cancellation, speech enhancement.

5052 Metabolic Analysis of Fibroblast Conditioned Media and Comparison with Theoretical Modeling

Authors: Priyanka Gupta, Paul Verma, Kerry Hourigan, Jayesh Bellare, Sameer Jadhav

Abstract:

Understanding the consumption and production of various metabolites of fibroblast conditioned media is needed for its proper and optimized use in expansion of pluripotent stem cells. For this purpose, we have used the HPLC method to analyse the consumption of glucose and the production of lactate over time by mouse embryonic fibroblasts. The experimental data have also been compared with mathematical model fits. 0.025 moles of lactate was produced after 72 hrs while the glucose concentration decreased from 0.017 moles to 0.011 moles. The mathematical model was able to predict the trends of glucose consumption and lactate production.
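
The paper's specific model is not reproduced here; as a minimal sketch, the snippet below fits constant (zero-order) consumption and production rates to time-course data by least squares. The endpoint values come from the abstract, while the intermediate time points are interpolated stand-ins for the HPLC measurements.

```python
import numpy as np

t = np.array([0.0, 24.0, 48.0, 72.0])               # hours
glucose = np.array([0.017, 0.015, 0.013, 0.011])     # moles (intermediate values assumed)
lactate = np.array([0.000, 0.009, 0.017, 0.025])     # moles (intermediate values assumed)

A = np.column_stack([t, np.ones_like(t)])            # linear model: c(t) = rate * t + c0
(g_rate, g0), *_ = np.linalg.lstsq(A, glucose, rcond=None)
(l_rate, l0), *_ = np.linalg.lstsq(A, lactate, rcond=None)
print(f"glucose consumption ~ {-g_rate:.2e} mol/h, lactate production ~ {l_rate:.2e} mol/h")
```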

Keywords: Conditioned media, HPLC, metabolite analysis, mouse embryonic fibroblast.

5051 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model

Authors: Yepeng Cheng, Yasuhiko Morimoto

Abstract:

Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator, based on the ID-POS database, for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and other nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and supermarkets in a city, and then constructed models based on logistic regression analysis to analyze the correlations between distance and purchasing behaviors using only the POS database of one supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, multicollinearity between the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In the numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the most suitable one for evaluating the influence of other nearby supermarkets on customers' purchasing at the supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
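
For reference, Huff's gravity model assigns each customer a probability of patronizing each store that grows with store attractiveness and decays with distance. The sketch below computes these probabilities; the floor areas used as attractiveness, the distances, and the distance-decay exponent are illustrative assumptions.

```python
import numpy as np

def huff_probabilities(attractiveness, distances, decay=2.0):
    """attractiveness: (n_stores,); distances: (n_customers, n_stores).
    Returns P[i, j] = (A_j / d_ij^decay) / sum_k (A_k / d_ik^decay)."""
    utility = attractiveness / np.power(distances, decay)
    return utility / utility.sum(axis=1, keepdims=True)

store_floor_area = np.array([1200.0, 800.0, 1500.0])   # proxy attractiveness (assumed)
dist_km = np.array([[0.5, 1.2, 2.0],                    # customer 1
                    [1.8, 0.4, 1.1]])                   # customer 2
print(huff_probabilities(store_floor_area, dist_km).round(3))
```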

Keywords: Customer value, Huff's Gravity Model, POS, retailer.

5050 On-line Identification of Continuous-time Hammerstein Systems via RBF Networks and Immune Algorithm

Authors: Tomohiro Hachino, Kengo Nagatomo, Hitoshi Takata

Abstract:

This paper deals with an on-line identification method of continuous-time Hammerstein systems by using the radial basis function (RBF) networks and immune algorithm (IA). An unknown nonlinear static part to be estimated is approximately represented by the RBF network. The IA is efficiently combined with the recursive least-squares (RLS) method. The objective function for the identification is regarded as the antigen. The candidates of the RBF parameters such as the centers and widths are coded into binary bit strings as the antibodies and searched by the IA. On the other hand, the candidates of both the weighting parameters of the RBF network and the system parameters of the linear dynamic part are updated by the RLS method. Simulation results are shown to illustrate the proposed method.
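
As a minimal sketch of the linear-in-parameter part described above, the snippet below runs a recursive least-squares (RLS) update of RBF output weights for an unknown static nonlinearity. The regressor construction, forgetting factor, and centers are assumptions, and the immune-algorithm search over RBF centers and widths is not reproduced.

```python
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.99, delta=1e3):
        self.theta = np.zeros(n_params)          # parameter estimate
        self.P = delta * np.eye(n_params)        # inverse correlation matrix
        self.lam = lam                           # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector, y: measured output at this step."""
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        err = y - phi @ self.theta
        self.theta += gain * err
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

def rbf_features(u, centers, width=1.0):
    return np.exp(-((u - centers) ** 2) / (2 * width**2))

rng = np.random.default_rng(3)
centers = np.linspace(-2, 2, 5)                  # assumed (would be tuned by the IA)
rls = RLS(n_params=centers.size)
for _ in range(500):
    u = rng.uniform(-2, 2)
    y = np.tanh(u) + 0.01 * rng.normal()         # stand-in unknown static nonlinearity
    rls.update(rbf_features(u, centers), y)
print(rls.theta.round(3))
```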

Keywords: Continuous-time System, Hammerstein System, On-line Identification, Immune Algorithm, RBF Network.

5049 Low Light Image Enhancement with Multi-Stage Interconnected Autoencoders Integration in Pix-to-Pix GAN

Authors: Muhammad Atif, Cang Yan

Abstract:

The enhancement of low-light images is a significant area of study aimed at improving the quality of images captured in challenging lighting environments. Recently, methods based on Convolutional Neural Networks (CNNs) have gained prominence as they offer state-of-the-art performance; however, many CNN-based approaches rely on increasing the size and complexity of the neural network. In this study, we propose an alternative method for improving low-light images using an autoencoder-based multiscale knowledge transfer model. Our method leverages three autoencoders, where the encoders of the first two autoencoders are directly connected to the decoder of the third autoencoder, and the decoders of the first two autoencoders are connected to the encoder of the third autoencoder. This architecture enables effective knowledge transfer, allowing the third autoencoder to learn from and benefit from the enhanced knowledge extracted by the first two autoencoders. We further integrate the proposed model into the Pix-to-Pix GAN framework: by using it as the generator, we aim to produce enhanced images that not only exhibit improved visual quality but also possess a more authentic and realistic appearance. The experimental results, both qualitative and quantitative, show that our method outperforms state-of-the-art methodologies.
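
The sketch below illustrates the cross-connection pattern only: two small convolutional autoencoders whose encoder features feed the third autoencoder's decoder and whose decoder outputs feed its encoder. The layer sizes, channel counts, and concatenation-based fusion are assumptions, and the Pix-to-Pix GAN training loop is not reproduced.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class AutoEncoder(nn.Module):
    def __init__(self, c=16):
        super().__init__()
        self.enc = conv_block(3, c)
        self.dec = conv_block(c, 3)

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

class FusionAutoEncoder(nn.Module):
    """Third autoencoder: its encoder also sees the other decoders' outputs,
    and its decoder also sees the other encoders' features."""
    def __init__(self, c=16):
        super().__init__()
        self.enc = conv_block(3 + 3 + 3, c)        # image + two decoder outputs
        self.dec = conv_block(c + c + c, 3)        # own code + two encoder codes

    def forward(self, x, enc1, dec1, enc2, dec2):
        z = self.enc(torch.cat([x, dec1, dec2], dim=1))
        return self.dec(torch.cat([z, enc1, enc2], dim=1))

ae1, ae2, fusion = AutoEncoder(), AutoEncoder(), FusionAutoEncoder()
x = torch.rand(1, 3, 64, 64)                       # stand-in low-light RGB input
z1, out1 = ae1(x)
z2, out2 = ae2(x)
enhanced = fusion(x, z1, out1, z2, out2)
print(enhanced.shape)
```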

Keywords: Low light image enhancement, deep learning, convolutional neural network, image processing.

5048 Parametric Cost Estimating Relationships for Design Effort Estimation

Authors: Adil Salam, Nadia Bhuiyan, Gerard J. Gouw

Abstract:

The Canadian aerospace industry faces many challenges. One of them is the difficulty of estimating costs. In particular, the design effort required in a project impacts resource requirements and lead time, and consequently the final cost. This paper presents the findings of a case study conducted for a recognized global leader in the design and manufacturing of aircraft engines. The study models parametric cost estimating relationships to estimate the design effort of integrated blade-rotor low-pressure compressor fans. Several effort drivers are selected to model the relationship. Comparative analyses of three types of models are conducted, and the model with the best accuracy and significance in design estimation is retained.
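
A common form of parametric cost estimating relationship (CER) is a power law in the effort drivers, fitted by log-linear least squares; the sketch below shows this on synthetic data. The drivers, coefficients, and data are assumptions, not the study's proprietary fan design data or its retained model form.

```python
import numpy as np

rng = np.random.default_rng(7)
blade_count = rng.integers(20, 60, size=40).astype(float)   # assumed effort driver
diameter = rng.uniform(0.5, 2.0, size=40)                    # assumed effort driver
effort = 12.0 * blade_count**0.8 * diameter**1.3 * rng.lognormal(sigma=0.1, size=40)

# Fit log(effort) = log(a) + b1*log(blades) + b2*log(diameter).
X = np.column_stack([np.ones(40), np.log(blade_count), np.log(diameter)])
coef, *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"effort ~ {a:.1f} * blades^{b1:.2f} * diameter^{b2:.2f}")
```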

Keywords: Effort estimation, design, aerospace.

5047 A Comparison of Recent Methods for Solving a Model 1D Convection Diffusion Equation

Authors: Ashvin Gopaul, Jayrani Cheeneebash, Kamleshsing Baurhoo

Abstract:

In this paper we study some numerical methods for solving a model one-dimensional convection-diffusion equation. The semi-discretisation of the space variable results in a system of ordinary differential equations whose solution involves the evaluation of a matrix exponential. Since the calculation of this term is computationally expensive, we study methods based on Krylov subspaces and on the restrictive Taylor series approximation, respectively. We also consider the Chebyshev pseudospectral collocation method for the spatial discretisation, and we present the numerical solutions obtained by these methods.
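
For context, the sketch below semi-discretises u_t = d·u_xx − c·u_x on a uniform grid with homogeneous Dirichlet boundaries and advances the solution with the exact matrix exponential; this dense expm call stands in for the Krylov or restrictive-Taylor approximations compared in the paper, and the coefficients and initial pulse are assumptions.

```python
import numpy as np
from scipy.linalg import expm

N, L = 100, 1.0
h = L / (N + 1)
x = np.linspace(h, L - h, N)
c, d = 1.0, 0.01                                    # convection and diffusion coefficients

# Central differences: u_xx ~ (u_{i+1} - 2u_i + u_{i-1})/h^2, u_x ~ (u_{i+1} - u_{i-1})/(2h)
main = -2 * d / h**2 * np.ones(N)
upper = (d / h**2 - c / (2 * h)) * np.ones(N - 1)
lower = (d / h**2 + c / (2 * h)) * np.ones(N - 1)
A = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)

u0 = np.exp(-200 * (x - 0.3) ** 2)                  # initial pulse
t = 0.2
u_t = expm(t * A) @ u0                              # u(t) = exp(tA) u(0)
print(u_t.max())
```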

Keywords: Chebyshev Pseudospectral collocation method, convection-diffusion equation, restrictive Taylor approximation.

5046 Improvement of Soft Clay Using Floating Cement Dust-Lime Columns

Authors: Adel Belal, Sameh Aboelsoud, Mohy Elmashad, Mohammed Abdelmonem

Abstract:

The two main criteria that control the design and performance of footings are the bearing capacity and the settlement of the soil. In soft soils, the construction of buildings, storage tanks, warehouses, etc. usually involves excessive settlement problems. To solve bearing capacity or settlement problems, soil improvement may be considered using different techniques, including encased cement dust-lime columns. The proposed research studies the effect of adding floating encased cement dust and lime mix columns to soft clay on the clay bearing capacity. Four experimental tests were carried out, using column diameters of 3.0 cm, 4.0 cm, and 5.0 cm and a column length of 60% of the clay layer thickness. A numerical model was constructed and verified using a commercial finite element package (PLAXIS 2D, V8.5). The verified model was used to study the effect of distributing columns around the footing at different distances. The study showed that the floating cement dust-lime columns enhanced the clay bearing capacity by 262%. The numerical model showed that the columns around the footing have a limited effect on the clay improvement.

Keywords: Bearing capacity, cement dust – lime columns, ground improvement, soft clay.

5045 Methodology to Assess the Circularity of Industrial Processes

Authors: B. F. Oliveira, T. I. Gonçalves, M. M. Sousa, S. M. Pimenta, O. F. Ramalho, J. B. Cruz, F. V. Barbosa

Abstract:

The EU Circular Economy action plan, launched in 2020, is one of the major initiatives to promote the transition towards a more sustainable industry. The circular economy is a popular concept used by many companies nowadays. Some industries are better prepared for this reality than others, and the tannery industry is a sector that needs more attention due to its strong environmental impact, caused by its dimension, intensive resource consumption, the lack of recyclability and second use of its products, and the industrial effluents generated by its manufacturing processes. For these reasons, the zero-waste goal and the European objectives are still far from being achieved. In this context, a need arises for an effective methodology to determine the level of circularity of tannery companies. Given the complexity of the circular economy concept, few factories have a specialist in sustainability to assess the company's circularity or the ability to implement circular strategies that could benefit their manufacturing processes. Although there are several methodologies to assess circularity in specific industrial sectors, there is no easy go-to methodology applied in factories aiming for cleaner production. Therefore, a straightforward methodology to assess the level of circularity, in this case of a tannery, is presented and discussed in this work, allowing any company to measure the impact of its activities. The methodology developed consists in calculating the Overall Circular Index (OCI) by evaluating the circularity of four key areas (energy, material, economy, and social) in a specific factory. The index is a value between 0 and 1, where 0 means a linear economy and 1 a completely circular economy. Each key area has a sub-index, obtained through key performance indicators (KPIs) for that theme, and the OCI is the average of the four sub-indexes. Some fieldwork in the appointed company was required in order to obtain all the necessary data. By having separate sub-indexes, one can observe which areas are more linear than others, and thus work on the most critical areas by implementing strategies to increase the OCI. After these strategies are implemented, the OCI is recalculated to check the improvements made and any other changes in the remaining sub-indexes. As such, the methodology works through continuous improvement, constantly re-evaluating and improving the circularity of the factory. The methodology is also flexible enough to be implemented in any industrial sector by adapting the KPIs. It was implemented in a selected Portuguese small and medium-sized enterprise (SME) in the tannery industry and proved to be a relevant tool for measuring the circularity level of the factory. It was observed that it becomes easier for non-specialists to evaluate circularity, identify possible solutions to increase its value, and learn how one action can impact their environment. In the end, energy and environmental inefficiencies were identified and corrected, increasing the sustainability and circularity of the company. Through this work, important contributions were made towards helping Portuguese SMEs achieve the European and UN 2030 sustainability goals.
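
A minimal sketch of the OCI computation described above is shown below: each key-area sub-index is an average of normalized KPIs in [0, 1], and the OCI is the mean of the four sub-indexes. The KPI names and values are illustrative assumptions, not the paper's actual indicators.

```python
kpis = {
    "energy":   {"renewable_share": 0.35, "energy_recovered": 0.20},
    "material": {"recycled_input": 0.15, "waste_valorised": 0.40},
    "economy":  {"circular_revenue_share": 0.10},
    "social":   {"training_coverage": 0.60},
}

# Sub-index per key area = mean of its normalized KPIs; OCI = mean of sub-indexes.
sub_indexes = {area: sum(values.values()) / len(values) for area, values in kpis.items()}
oci = sum(sub_indexes.values()) / len(sub_indexes)   # 0 = linear, 1 = fully circular
print(sub_indexes, round(oci, 3))
```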

Keywords: Circular economy, circularity index, sustainability, tannery industry, zero-waste.

5044 A Novel Web Metric for the Evaluation of Internet Trends

Authors: Radek Malinský, Ivan Jelínek

Abstract:

Web 2.0 (social networking, blogging and online forums) can serve as a data source for social science research because it contains vast amounts of information from many different users. The volume of that information has been growing at a very high rate and has become a network of heterogeneous data, which makes things difficult to find and therefore of limited use. We have proposed a novel theoretical model for gathering and processing data from Web 2.0 which better reflects the semantic content of web pages. This article deals with the analysis part of the model and its use for the content analysis of blogs. The introductory part of the article describes the methodology for gathering and processing data from blogs. The next part focuses on the evaluation and content analysis of blogs that write about a specific trend.

Keywords: Blog, Sentiment Analysis, Web 2.0, Webometrics

5043 Modeling of the Process Parameters using Soft Computing Techniques

Authors: Miodrag T. Manić, Dejan I. Tanikić, Miloš S. Stojković, Dalibor M. Đenadić

Abstract:

The design of technological procedures for manufacturing certain products demands the definition and optimization of technological process parameters. Their determination depends on the model of the process itself and its complexity. Certain processes do not have an adequate mathematical model, and are thus modeled using heuristic methods. The first part of this paper presents the state of the art of using soft computing techniques in manufacturing processes from the perspective of their applicability in modern CAx systems, and analyzes the methods of artificial intelligence that can be used for this purpose. The second part shows some of the developed models of certain processes, as well as their applicability in the actual calculation of parameters of some technological processes within the design system, from the viewpoint of productivity.

Keywords: fuzzy logic, manufacturing, neural networks

5042 Knowledge Discovery from Production Databases for Hierarchical Process Control

Authors: Pavol Tanuska, Pavel Vazan, Michal Kebisek, Dominika Jurovata

Abstract:

The paper gives the results of a project oriented towards the use of knowledge discovered from production systems for the needs of hierarchical process control. One of the main project goals was the proposal of a knowledge discovery model for process control. Specific data mining methods and techniques were used for the defined process control problems. The gained knowledge was applied to a real production system, and thus the proposed solution has been verified. The paper documents how it is possible to apply the newly discovered knowledge in real hierarchical process control, and specifies the opportunities for applying the proposed knowledge discovery model to hierarchical process control.

Keywords: Hierarchical process control, knowledge discovery from databases, neural network.

5041 Model Inversion of a Two Degrees of Freedom Linearized PUMA from Bicausal Bond Graphs

Authors: Gilberto Gonzalez-A., Ignacio Rodríguez-A., Dunia Nuñez-P.

Abstract:

A bond graph model of a two-degrees-of-freedom PUMA is described. System inversion gives the system input required to generate a given system output. In order to obtain the system inversion of the PUMA manipulator, a linearization of the nonlinear bond graph is obtained, and bicausality is then applied to the linearized bond graph of the PUMA manipulator. The bicausal bond graph thus provides a systematic way of generating the equations of the system inversion. Simulation results that verify the calculated input for a given output are shown.

Keywords: Bond graph, system inversion, bicausality, PUMA manipulator

5040 Simultaneous HPAM/SDS Injection in Heterogeneous/Layered Models

Authors: M. H. Sedaghat, A. Zamani, S. Morshedi, R. Janamiri, M. Safdari, I. Mahdavi, A. Hosseini, A. Hatampour

Abstract:

Although many experiments have been done on enhanced oil recovery, few of them consider the effects of local and global heterogeneity on the efficiency of enhanced oil recovery based on polymer-surfactant flooding. In this research, we performed numerous water flooding and polymer-surfactant flooding experiments on a five-spot glass micromodel under different conditions, such as different positions of the layers. In these experiments, five different micromodels with three different pore structures were designed: three models with different layer orientations, one homogeneous model, and one heterogeneous model. In order to capture the effect of heterogeneity of the porous medium, the three pore structure types were distributed randomly and in equal proportion throughout the heterogeneous micromodel network according to a normal distribution. The results show that the maximum EOR recovery factor occurs when the layers are orthogonal to the path of the mainstream, and the minimum EOR recovery factor occurs when the model is heterogeneous. These experiments show that in polymer-surfactant flooding, the EOR recovery factor increases with the angle of the layers, and that this recovery factor is strongly affected by local heterogeneity around the injection zone.

Keywords: Layered Reservoir, Micromodel, Local Heterogeneity, Polymer-Surfactant Flooding, Enhanced Oil Recovery.

5039 Digital Hypertexts vs. Traditional Books: An Inquiry into Non-Linearity

Authors: Federica Fornaciari

Abstract:

The current study begins with an awareness that today's media environment is characterized by technological development and a new way of reading caused by the introduction of the Internet. The researcher conducted a meta-analysis, framed within technological determinism, to investigate the process of hypertext reading, its differences from linear reading, and the effects such differences can have on people's ways of mentally structuring their world. The relationship between literacy and the comprehension achieved by reading hypertexts is also investigated. The results show that hypertexts are not always user friendly. People experience hyperlinks as interruptions that distract their attention, generating comprehension problems and disorientation. On the one hand, hypertextual jumping reading generates interruptions that eventually make people lose their concentration. On the other hand, hypertexts fascinate people, who would rather read a document in such a format even though the outcome is often frustrating and affects their ability to elaborate and retain information.

Keywords: Hypertext reading, Internet, non-linearity.

5038 Radiation Effect on MHD Casson Fluid Flow over a Power-Law Stretching Sheet with Chemical Reaction

Authors: Motahar Reza, Rajni Chahal, Neha Sharma

Abstract:

This article addresses the boundary layer flow and heat transfer of a Casson fluid over a nonlinearly permeable stretching surface with chemical reaction in the presence of a variable magnetic field. The effect of thermal radiation is considered to control the rate of heat transfer at the surface. Using similarity transformations, the governing partial differential equations of this problem are reduced into a set of non-linear ordinary differential equations, which are solved by the finite difference method. It is observed that the velocity at a fixed point decreases with increasing nonlinear stretching parameter, but the temperature increases with the nonlinear stretching parameter.

Keywords: Boundary layer flow, nonlinear stretching, Casson fluid, heat transfer, radiation.
