Search results for: Dirichlet process mixture model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29126

28166 Bridging the Gap between M&E and KM: Towards the Integration of Evidence-Based Information and Policy Decision-Making

Authors: Xueqing Ivy Chen, Christo De Coning

Abstract:

It is clear from practice that a gap exists between Result-Based Monitoring and Evaluation (RBME) as a discipline and Knowledge Management (KM). Whereas various government departments have institutionalised both functions, KM and M&E have, in practice, operated in isolation from each other in the public sector. It is therefore necessary to explore the relationship between KM and M&E and the case for integration, so that a convergence of these disciplines can be established. Integrating KM and M&E will improve evidence-based information and policy decision-making. M&E and KM process models are available, but the complementarity between specific steps of these process models is not exploited. A need exists to clarify the relationships between these functions in order to ensure evidence-based information and policy decision-making. This paper departs from the well-known policy process models, such as the generic model, and considers recent work on the interface between policy, M&E and KM.

Keywords: result-based monitoring and evaluation, RBME, knowledge management, KM, evidence-based decision-making, public policy, information systems, institutional arrangement

Procedia PDF Downloads 149
28165 An Axiomatic Model for Development of the Allocated Architecture in Systems Engineering Process

Authors: Amir Sharahi, Reza Tehrani, Ali Mollajan

Abstract:

The final step in completing the “Analytical Systems Engineering Process” is the “Allocated Architecture”, in which all Functional Requirements (FRs) of an engineering system must be allocated to their corresponding Physical Components (PCs). At this step, any design of the allocated architecture that shows no clear pattern of assigning exclusive “responsibility” to each PC for fulfilling its allocated FR(s) is considered a poor design, as it may cause difficulties in determining the specific PC(s) that have failed to satisfy a given FR. The present study applies the principles of the Axiomatic Design method to address this problem mathematically and establishes an “Axiomatic Model” for reaching good alternatives for the allocated architecture. The study also proposes a “loss function” as a quantitative criterion for comparing non-ideal designs of the allocated architecture in monetary terms and for choosing the one that imposes a relatively lower cost on the system’s stakeholders. As a case study, we use the existing design of the U.S. electricity marketing subsystem, based on data provided by the U.S. Energy Information Administration (EIA). The result for 2012 shows the symptoms of a poor design and of ineffectiveness due to coupling among the FRs of this subsystem.
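
As an illustration of the independence idea behind the proposed model, the sketch below (in Python, with a hypothetical 3x3 design matrix, not the paper's EIA data) classifies an FR-to-PC design matrix as uncoupled, decoupled, or coupled; a coupled matrix is exactly the "no clear pattern of responsibility" case the abstract describes.

```python
import numpy as np

# Hypothetical FR-to-PC design matrix: entry [i, j] is nonzero when
# physical component j contributes to functional requirement i.
A = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
])

def classify_design(A):
    """Classify a square design matrix per Axiomatic Design's independence axiom."""
    B = A != 0
    if B.sum() == np.trace(B):  # no off-diagonal entries populated
        return "uncoupled (ideal: one PC responsible per FR)"
    if np.allclose(B, np.tril(B)) or np.allclose(B, np.triu(B)):
        return "decoupled (acceptable if PCs are fixed in sequence)"
    return "coupled (poor design: responsibility cannot be isolated)"

print(classify_design(A))  # -> decoupled (acceptable if PCs are fixed in sequence)
```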

Keywords: allocated architecture, analytical systems engineering process, functional requirements (FRs), physical components (PCs), responsibility of a physical component, system’s stakeholders

Procedia PDF Downloads 401
28164 Semantic Platform for Adaptive and Collaborative e-Learning

Authors: Massra M. Sabeima, Myriam Lamolle, Mohamedade Farouk Nanne

Abstract:

Adapting the learning resources of an e-learning system to the characteristics of the learners is an important aspect to consider when designing an adaptive e-learning system. This adaptation, however, is not a simple process; it requires the extraction, analysis, and modeling of user information, which implies a good representation of the user's profile, the backbone of the adaptation process. Moreover, during the e-learning process, collaboration with similar users (from the same geographic province or knowledge context) is important: productive collaboration motivates users to persist with the course rather than abandon it, and it increases the assimilation of learning objects. The contribution of this work is twofold: we propose an adaptive e-learning semantic platform that recommends learning resources to learners, using an ontology to model the user profile and the course content; and we implement a multi-agent system able to progressively generate the learning graph for each user during the learning process (taking into account the user's progress and any changes that occur) and to synchronize the users who collaborate on a learning object.
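
As a rough illustration of the progressive learning-graph idea, the Python sketch below recommends the next learning objects whose prerequisites a learner has completed. The course graph and names are hypothetical; the paper's ontology- and agent-based implementation is considerably richer.

```python
# Hypothetical prerequisite graph for a course: each learning object maps to
# the objects that must be completed first.
prereqs = {
    "variables": [],
    "loops": ["variables"],
    "functions": ["variables"],
    "recursion": ["functions"],
    "project": ["loops", "functions"],
}

def next_objects(prereqs, completed):
    """Learning objects not yet done whose prerequisites are all satisfied."""
    return [obj for obj, reqs in prereqs.items()
            if obj not in completed and all(r in completed for r in reqs)]

print(next_objects(prereqs, completed={"variables"}))  # ['loops', 'functions']
```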

Keywords: adaptive learning, collaboration, multi-agent, ontology

Procedia PDF Downloads 170
28163 On the Transfer of Transient Signals along a Hollow Waveguide

Authors: E. Eroglu, S. Semsit, E. Sener, U.S. Sener

Abstract:

In electromagnetics, there are three canonical boundary value problems with given initial conditions for the sought electromagnetic field, namely the Cavity Problem, the Waveguide Problem, and the External Problem. The Cavity and Waveguide Problems have been rigorously studied, and new results have appeared in original works over the past decades. Building on studies of an analytical time-domain method, the Evolutionary Approach to Electromagnetics (EAE), the electromagnetic field strength vectors produced by a time-dependent source function are sought. The fields are sought in the L2 Hilbert space. The source function that performs the signal transfer, together with the associated energy and surplus energy, is demonstrated explicitly. The depth of the method and the ease of its application emerge from the results obtained. The main discussion concerns a perfect electric conductor and a hollow waveguide. Although well-studied time-domain mode problems are mentioned, the focus is specifically on modes whose cross-section domain is hollow (i.e., medium-free).
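
For readers connecting the keywords: in the evolutionary approach, the time-domain amplitude of each waveguide mode satisfies a one-dimensional Klein-Gordon equation. A sketch of the standard form is given below, where kappa is the cutoff eigenvalue of the transverse problem on the hollow cross-section (Dirichlet for TM modes, Neumann for TE modes); this is the textbook reduction, stated here for orientation rather than as the paper's derivation.

```latex
% Modal evolution equation implied by the keywords: each mode amplitude
% A(z, t) obeys a 1-D Klein-Gordon equation, with \kappa the eigenvalue of
% the Dirichlet (TM) or Neumann (TE) problem on the cross-section.
\[
  \frac{\partial^2 A}{\partial t^2}
  - c^2 \frac{\partial^2 A}{\partial z^2}
  + c^2 \kappa^2 A = 0,
  \qquad
  \bigl(\nabla_\perp^2 + \kappa^2\bigr)\,\psi = 0
  \ \text{on the cross-section.}
\]
```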

Keywords: evolutionary approach to electromagnetics, time-domain waveguide mode, Neumann problem, Dirichlet boundary value problem, Klein-Gordon

Procedia PDF Downloads 324
28162 Public Participation as a Social Inclusion Tool in the Urban Planning Process: A Case Study of Abuja, Nigeria

Authors: Nwachi Prosper Louis, Cynthia Ogonna Ikesee

Abstract:

The urban planning system of cities varies by country, but in general it is an instrument for establishing long-term sustainable frameworks and plans for social, institutional, and economic development. In most communities there is limited knowledge, development, and implementation of effective and sustainable urban planning structures and plans that encourage social inclusion. This has led to social, economic, and environmental deficiencies, resulting in community isolation and segregation by class, ethnicity, and race. Encouraging public participation in the urban planning process is one of the instruments that cities can utilise to achieve better social inclusion outcomes. This paper explores how public participation can be used as a social inclusion tool in the urban planning process to achieve better outcomes in the Abuja urban planning system, and it investigates the effectiveness of this approach. A conceptual model is also developed to evaluate the relationship between public participation and social inclusion outcomes in the urban planning process. Every community has its own way of life and challenges, and an understanding of these societal needs is paramount in the urban planning process. Therefore, involving the public in identifying their needs, selecting priorities, and identifying strategies offers a better chance of developing solutions that are sustainable, feasible, and implementable.

Keywords: public participation, social inclusion, urban planning, urban planning process

Procedia PDF Downloads 192
28161 Facial Emotion Recognition Using Deep Learning

Authors: Ashutosh Mishra, Nikhil Goyal

Abstract:

A 3D facial emotion recognition model based on deep learning is proposed in this paper. The architecture employs two convolution layers and a pooling layer, with pooling applied after the convolution stages. The probabilities for the various classes of human faces are calculated using the sigmoid activation function. The Kaggle dataset is used to verify the accuracy of the deep learning-based face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite the significant gains in representation precision afforded by the nonlinearity of deep image representations.
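
A minimal sketch of the described architecture, written in Keras for concreteness: two convolution layers, one pooling layer, and a sigmoid output. The 48x48 grayscale input and seven emotion classes are assumptions based on the common Kaggle FER-2013 format; the abstract does not state them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Two convolution layers and one pooling layer, as described in the abstract;
# the sigmoid output mirrors the abstract's stated activation choice.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),          # assumed FER-2013-style input
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(7, activation="sigmoid"),    # per-class probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```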

Keywords: facial recognition, computational intelligence, convolutional neural network, depth map

Procedia PDF Downloads 225
28160 Investigation and Comprehensive Benefit Analysis of 11 Typical Poplar-Based Agroforestry Models Based on Analytic Hierarchy Process in Anhui Province, Eastern China

Authors: Zhihua Cao, Hongfei Zhao, Zhongneng Wu

Abstract:

The development of poplar-based agroforestry is necessary given the timber market environment in China; it can promote the coordinated development of forestry and agriculture and yield remarkable ecological, economic, and social benefits. An investigation of the main agroforestry models in the principal poplar planting areas, the Huaibei plain and the plain along the Yangtze River, was carried out. Eleven typical management models of poplar were selected: pure poplar forest, poplar-rape-soybean, poplar-wheat-soybean, poplar-rape-cotton, poplar-wheat, poplar-chicken, poplar-duck, poplar-sheep, poplar-Agaricus blazei, poplar-oil peony, and poplar-fish, represented by M0-M10, respectively. Twelve indexes related to economic, ecological, and social benefits (annual average cost, net income, ratio of output to investment, payback period of investment, land utilization ratio, utilization ratio of light energy, improvement and system stability of the ecological and production environment, product richness, labor capacity, cultural quality of the labor force, and sustainability) were screened to carry out a comprehensive evaluation and analysis of the 11 typical agroforestry models based on the analytic hierarchy process (AHP). The results showed that the economic benefit of the models was ordered as follows: M8 > M6 > M9 > M7 > M5 > M10 > M4 > M1 > M2 > M3 > M0. The economic benefit of the poplar-A. blazei model was the highest (332,800 RMB/hm²), followed by the poplar-duck and poplar-oil peony models (109,820 RMB/hm² and 57,226 RMB/hm², respectively). The order of comprehensive benefit was: M8 > M4 > M9 > M6 > M1 > M2 > M3 > M7 > M5 > M10 > M0. The economic and comprehensive benefits of each agroforestry model were higher than those of the pure poplar forest. The comprehensive benefit of the poplar-A. blazei model was the highest, and that of the poplar-wheat model ranked second, although its economic benefit was not high; next were the poplar-oil peony and poplar-duck models. It is suggested that the poplar-wheat model be adopted in the plain along the Yangtze River, and that the whole-cycle mode of poplar-grain, poplar-A. blazei, or poplar-oil peony be adopted in the Huaibei plain, northern Anhui. Furthermore, wheat, rape, and soybean are the main intercrops before the stand closes; agroforestry models based on edible fungi or Chinese herbal medicine can be adopted once the stand has closed, in order to maximize the comprehensive benefit. The purpose of this paper is to provide a reference for forest farmers selecting a poplar agroforestry model in the future and to provide basic data for the sustainable and efficient study of poplar agroforestry in Anhui province, eastern China.
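
The weighting step of the analysis can be illustrated with a minimal AHP sketch: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix and checked for consistency. The 3x3 matrix below is illustrative only; the study uses 12 criteria and its own expert judgments.

```python
import numpy as np

# Illustrative pairwise comparison matrix (Saaty's 1-9 scale), not the
# paper's 12-criterion data.
P = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(P)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # principal-eigenvector weights

n = P.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)         # consistency index
RI = 0.58                                    # Saaty's random index for n = 3
print("weights:", weights.round(3), "CR:", round(CI / RI, 3))  # CR < 0.1 is acceptable
```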

Keywords: agroforestry, analytic hierarchy process (AHP), comprehensive benefit, model, poplar

Procedia PDF Downloads 160
28159 GRABTAXI: A Taxi Revolution in Thailand

Authors: Danuvasin Charoen

Abstract:

The study investigates the business process and business model of GRABTAXI. The paper also discusses how the company implemented strategies to gain competitive advantages. The data are derived from the analysis of secondary sources and from in-depth interviews with staff, taxi drivers, and key customers. The findings indicate that the company's competitive advantages come from being the first mover, emphasising the ease of use and tangible benefits of the application, and pursuing a network-effect strategy.

Keywords: taxi, mobile application, innovative business model, Thailand

Procedia PDF Downloads 296
28158 Model Development for Real-Time Human Sitting Posture Detection Using a Camera

Authors: Jheanel E. Estrada, Larry A. Vea

Abstract:

This study developed a model to detect proper/improper sitting posture using a built-in web camera that detects the locations of, and distances between, upper body points (chin, manubrium, and acromion process). It also established relationships between human body frames and proper sitting posture. The models were developed by training several well-known classifiers, namely KNN, SVM, MLP, and Decision Tree, using data collected from 60 students with different body frames. The Decision Tree classifier demonstrated the most promising performance, with an accuracy of 95.35% and a kappa of 0.907 for head and shoulder posture. Results also showed a relationship between body frame and posture through the Body Mass Index.
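
A hedged sketch of the classifier comparison described, using scikit-learn with placeholder data standing in for the collected body-point features; it reports the same two metrics (accuracy and Cohen's kappa) used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder features/labels; the study used body-point distances and
# posture labels from 60 students.
X, y = make_classification(n_samples=600, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f}, "
          f"kappa={cohen_kappa_score(y_te, pred):.3f}")
```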

Keywords: posture, spinal points, gyroscope, image processing, ergonomics

Procedia PDF Downloads 325
28157 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data

Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer

Abstract:

This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) process are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties, and the forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is proposed in which BINARMA(1,1) count data are generated using a multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through this simulation experiment, and the mean estimates of the model parameters are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, with covariates for policemen, daily patrols, speed cameras, traffic lights, and roundabouts. Applied to these accident data, the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius, and the forecasting equations provide reliable one-step-ahead forecasts.
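
A minimal Python sketch of the data-generating mechanism described (the paper itself uses R): binomial thinning drives the autoregressive and moving-average parts, and a common Poisson shock induces the cross-correlation between the two innovation series. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def thin(x, a):
    """Binomial thinning operator: a ∘ x ~ Binomial(x, a)."""
    return rng.binomial(x, a)

def innovations(lam1=2.0, lam2=3.0, lam12=1.0, T=500):
    """Correlated bivariate Poisson innovations via a common shock."""
    z12 = rng.poisson(lam12, T)
    return rng.poisson(lam1, T) + z12, rng.poisson(lam2, T) + z12

def binarma11(alpha, beta, T=500):
    r1, r2 = innovations(T=T)
    x = np.zeros((T, 2), dtype=int)
    for t in range(1, T):
        x[t, 0] = thin(x[t-1, 0], alpha[0]) + r1[t] + thin(r1[t-1], beta[0])
        x[t, 1] = thin(x[t-1, 1], alpha[1]) + r2[t] + thin(r2[t-1], beta[1])
    return x

series = binarma11(alpha=(0.4, 0.3), beta=(0.2, 0.25))
print(np.corrcoef(series.T))   # cross-correlation induced by the innovations
```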

Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood, CML

Procedia PDF Downloads 125
28156 Effectiveness Factor for Non-Catalytic Gas-Solid Pyrolysis Reaction for Biomass Pellet Under Power Law Kinetics

Authors: Haseen Siddiqui, Sanjay M. Mahajani

Abstract:

Various important reactions in the chemical and metallurgical industries fall into the category of gas-solid reactions, which can be classified as catalytic or non-catalytic. In gas-solid reaction systems, heat and mass transfer limitations exert an appreciable influence on the reaction rate, and overlooking such effects while collecting reaction rate data for reactor design can have unavoidable consequences. Pyrolysis, which involves the production of gases through the interaction of heat and a solid substance, belongs to this category. Pyrolysis is also an important step in gasification: gasification reactivity is strongly influenced by the pyrolysis step, which produces the char fed to the gasification process. Therefore, in the present study, a non-isothermal transient 1-D model is developed for a single biomass pellet to investigate the effect of heat and mass transfer limitations on the rate of the pyrolysis reaction. The resulting set of partial differential equations is first discretized using the method of lines to obtain a set of ordinary differential equations in time; these equations are then solved using the MATLAB ODE solver ode15s. The model is capable of incorporating structural changes, porosity variation, variation in various thermal properties, and various pellet shapes. The model is used to analyze the effectiveness factor for different values of the Lewis number and of the heat of reaction (G factor). The Lewis number captures the effect of the thermal conductivity of the solid pellet: the higher the Lewis number, the higher the thermal conductivity of the solid. The effectiveness factor was found to decrease with decreasing Lewis number, because smaller Lewis numbers retard the rate of heat transfer inside the pellet, leading to a lower rate of pyrolysis. The G factor captures the effect of the heat of reaction. Since the pyrolysis reaction is endothermic, the G factor takes negative values; the more negative the value, the more endothermic the pyrolysis reaction. The effectiveness factor was found to decrease with more negative values of the G factor, because a more negative G factor results in greater energy consumption by the reaction and hence a larger temperature gradient inside the pellet. Further, analytical expressions are derived for the gas and solid concentrations and for the effectiveness factor in two limiting cases of the general model: the homogeneous model and the unreacted shrinking core model.
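
A compact sketch of the numerical strategy, transplanted to Python with SciPy's stiff BDF integrator standing in for MATLAB's ode15s: a 1-D transient reaction-diffusion equation with power-law kinetics is discretized in space by the method of lines and integrated in time. Geometry, kinetics, and property values are illustrative, not the paper's.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, L = 50, 5e-3                 # grid points, pellet half-thickness [m]
dx = L / (N - 1)
D, k, n = 1e-6, 5.0, 1.0        # diffusivity, rate constant, reaction order

def rhs(t, c):
    """Method of lines: dC/dt = D d2C/dx2 - k*C**n on a 1-D grid."""
    dc = np.empty_like(c)
    dc[1:-1] = D * (c[2:] - 2*c[1:-1] + c[:-2]) / dx**2 - k * c[1:-1]**n
    dc[0] = D * 2*(c[1] - c[0]) / dx**2 - k * c[0]**n   # symmetry at centre
    dc[-1] = 0.0                                        # fixed surface value
    return dc

c0 = np.zeros(N); c0[-1] = 1.0
sol = solve_ivp(rhs, (0.0, 10.0), c0, method="BDF")     # stiff solver, as ode15s
print(sol.y[:, -1].round(3))    # concentration profile at final time
```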

Keywords: effectiveness factor, G-factor, homogeneous model, Lewis number, non-catalytic, shrinking core model

Procedia PDF Downloads 130
28155 Hybrid Direct Numerical Simulation and Large Eddy Simulation Wall Models Approach for the Analysis of Turbulence Entropy

Authors: Samuel Ahamefula

Abstract:

Turbulent motion is a highly nonlinear and complex phenomenon, and its modelling is still very challenging. In this study, we develop a hybrid computational approach to simulate fluid turbulence accurately, focusing on the coupling and transitioning between Direct Numerical Simulation (DNS) and Large Eddy Simulation with Wall Models (LES-WM) regions. In this framework, high-order, high-fidelity fluid dynamics methods are used to solve the unsteady compressible Navier-Stokes equations in Eulerian form on unstructured moving grids. The coupling and transitioning of the DNS and LES-WM regions are handled through a linearly staggered Dirichlet-Neumann coupling scheme. The framework is verified and validated on the basis of the DNS region's ability to capture the full range of turbulent scales with accurate results and the LES-WM region's efficiency in simulating the near-wall turbulent boundary layer using wall models.
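
To make the coupling idea concrete, here is a toy staggered Dirichlet-Neumann iteration on the 1-D Laplace equation: the "left" subdomain is solved with a Dirichlet interface value, its flux is handed to the "right" subdomain as a Neumann condition, and the interface value is relaxed until convergence. This is only a minimal analogue of the DNS/LES-WM handshake, not the authors' solver.

```python
# Toy problem: u'' = 0 on [0, 1], u(0) = 0, u(1) = 1, split at x = 0.5.
# Both subdomain solves are written out analytically for clarity.
theta = 0.3          # relaxation factor (stabilises the staggered exchange)
lam = 0.0            # initial guess for the interface value u(0.5)

for it in range(60):
    flux = 2.0 * lam             # left Dirichlet solve: u1(x) = 2*lam*x, so u1' = 2*lam
    u2_interface = 1.0 - lam     # right Neumann solve with u2'(0.5) = flux, u2(1) = 1
    lam_new = theta * u2_interface + (1 - theta) * lam
    if abs(lam_new - lam) < 1e-10:
        break
    lam = lam_new

print(f"interface value after {it + 1} iterations: {lam:.6f}")  # -> 0.5 (exact u(x) = x)
```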

Keywords: computational methods, turbulence modelling, turbulence entropy, Navier-Stokes equations

Procedia PDF Downloads 96
28154 Selection of Social and Sustainability Criteria for Public Investment Project Evaluation in Developing Countries

Authors: Pintip Vajarothai, Saad Al-Jibouri, Johannes I. M. Halman

Abstract:

Public investment projects are primarily aimed at achieving development strategies, increasing national economies of scale, and achieving overall improvement in a country. However, experience shows that public projects, particularly in developing countries, struggle or fail to fulfil the immediate needs of local communities. In many cases this is because projects are selected in a subjective manner, and a major part of the problem lies in the evaluation criteria and techniques used. The evaluation process is often based on broad strategic economic effects rather than on the real benefits of projects to society, or on the various needs arising at different levels (e.g., national, regional, local) and under different conditions (e.g., long-term and short-term requirements). In this paper, an extensive literature review of the types of criteria used by researchers in the project evaluation and selection process is carried out, and the effectiveness of such criteria and techniques is discussed. The paper proposes substitute social and project sustainability criteria intended to improve the conditions of local people, in particular the disadvantaged groups of the communities. Furthermore, it puts forward a way of modelling the interaction between the selected criteria and the achievement of the social goals of the affected community groups. The work described is part of the development of a broader decision model for public investment project selection that integrates various aspects and techniques into a practical methodology. The paper uses Thailand as a case to review which evaluation techniques are currently used and how, and to show how to improve the project evaluation and selection process with respect to social and sustainability issues in the country. The paper also uses an example to demonstrate how to test the feasibility of various criteria and how to model the interaction between projects and communities. The proposed model could be applied in the project evaluation and selection process of other developing and developed countries to improve its effectiveness in the long run.

Keywords: evaluation criteria, developing countries, public investment, project selection methodology

Procedia PDF Downloads 270
28153 Revolutionizing Manufacturing: Embracing Additive Manufacturing with Eggshell Polylactide (PLA) Polymer

Authors: Choy Sonny Yip Hong

Abstract:

This abstract presents an exploration into the creation of a sustainable bio-polymer compound for additive manufacturing, specifically 3D printing, with a focus on eggshells and polylactide (PLA) polymer. The project initially conducted experiments using a variety of food by-products to create bio-polymers, and promising results were obtained when eggshells were combined with PLA. The research involved precise measurement of the PLA and eggshell materials, drying of the PLA to remove absorbed moisture, and the use of a filament-making machine to produce 3D-printable filaments. The initial mixing of the two materials involved heating them just above the melting point, and the research then focused on finding the optimal formulation and production process to make the compound 3D printable. Handmade testing samples were created to guide the planning of the 3D-printed versions, scrap PLA was recycled and ground into a powdered state, and the drying process required several hours of gradual moisture evaporation. The PLA and eggshell materials were then fed into the hopper of a filament-making machine, whose four heating elements controlled the temperature of the melted compound mixture, allowing filament to be extruded and wound on a wheel with accurate and consistent thickness. During the testing phase, trials were conducted with different percentages of eggshell in the PLA mixture; at a high eggshell percentage (20%), poor extrusion results were observed. Samples were created, and the formulation was continuously improved and optimized to achieve filaments with good performance. To test the 3D printability of the DIY filament, a 3D printer was set up to print it smoothly and consistently, and printed samples were mechanically tested on a universal testing machine to determine their mechanical properties and their suitability for additive manufacturing applications. In conclusion, the project demonstrates a sustainable bio-polymer compound of eggshells and PLA for 3D printing. The findings contribute to the advancement of additive manufacturing, offering opportunities for design innovation, carbon footprint reduction, supply chain optimization, and collaboration. The use of an eggshell-PLA polymer in additive manufacturing has the potential to transform the manufacturing industry by providing a sustainable alternative and enabling the production of intricate, customized products.

Keywords: additive manufacturing, 3D printing, eggshell PLA polymer, design innovation, carbon footprint reduction, supply chain optimization, collaborative potential

Procedia PDF Downloads 69
28152 The Volume–Volatility Relationship Conditional to Market Efficiency

Authors: Massimiliano Frezza, Sergio Bianchi, Augusto Pianese

Abstract:

The relation between stock price volatility and trading volume is a controversial issue that has received remarkable attention over the past decades. An extensive literature documents a positive relation between price volatility and trading volume in financial markets, but the causal relationship underlying this association remains an open question, from both a theoretical and an empirical point of view. Various models, complementary rather than competitive, have been introduced to explain the relationship, including the long-debated Mixture of Distributions Hypothesis (MDH), the Sequential Arrival of Information Hypothesis (SAIH), the Dispersion of Beliefs Hypothesis (DBH), and the Noise Trader Hypothesis (NTH). In this work, we analyze whether stock market efficiency can explain the diversity of results achieved over the years. For this purpose, we propose an alternative measure of market efficiency, based on the pointwise regularity of a stochastic process: the Hurst-Hölder dynamic exponent. In particular, we model the stock market by means of the multifractional Brownian motion (mBm), which displays a time-changing regularity. Such models locally behave as a fractional Brownian motion, in the sense that their local regularity at time t0 (measured by the local Hurst-Hölder exponent in a neighborhood of t0) equals the exponent of a fractional Brownian motion of parameter H(t0). Assuming that the stock price follows an mBm, we introduce and theoretically justify the Hurst-Hölder dynamic exponent as a measure of market efficiency. This allows one to measure, at any time t, the market's departure from the martingale property, i.e., from efficiency as stated by the Efficient Market Hypothesis. This approach is applied to financial markets: using data for the S&P 500 index from 1978 to 2017, we find that when efficiency is not accounted for, a positive contemporaneous relationship emerges and is stable over time; conversely, it disappears as soon as efficiency is taken into account. In particular, the association is more pronounced during time frames of high volatility and tends to disappear when the market becomes fully efficient.
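
A simple way to illustrate the time-varying regularity idea is a rolling estimate of H from the scaling of squared increments, E|X(t+tau) - X(t)|^2 proportional to tau^(2H); the Python sketch below applies it to a simulated random walk (H near 0.5, the efficient-market benchmark). This is a crude proxy, not the authors' estimator of the Hurst-Hölder dynamic exponent.

```python
import numpy as np

def local_hurst(x, window=256, lags=(1, 2, 4, 8, 16)):
    """Rolling H estimate from the log-log slope of mean squared increments."""
    lags = np.asarray(lags)
    H = np.full(len(x), np.nan)
    for t in range(window, len(x)):
        seg = x[t - window:t]
        m2 = [np.mean((seg[l:] - seg[:-l])**2) for l in lags]
        slope, _ = np.polyfit(np.log(lags), np.log(m2), 1)
        H[t] = slope / 2.0          # since m2 ~ tau**(2H)
    return H

rng = np.random.default_rng(1)
log_price = np.cumsum(rng.normal(size=2000))   # placeholder for a log price series
H = local_hurst(log_price)
print(np.nanmean(H))   # ~0.5 for a random walk, i.e. an efficient market
```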

Keywords: volume–volatility relationship, efficient market hypothesis, martingale model, Hurst–Hölder exponent

Procedia PDF Downloads 76
28151 Fatigue Life Prediction under Variable Loading Based a Non-Linear Energy Model

Authors: Aid Abdelkrim

Abstract:

A method of fatigue damage accumulation based on energy parameters of the fatigue process is proposed in this paper. The model is simple to use: it has no free parameter to be determined and requires only knowledge of the W-N curve (W: strain energy density; N: number of cycles to failure) determined from the experimental Wöhler curve. To examine the performance of the proposed nonlinear models in estimating fatigue damage and fatigue life under random loading, a batch of specimens made of 6082-T6 aluminium alloy was studied, and some of the results are reported in the present paper. The paper describes an algorithm and suggests a fatigue cumulative damage model suited in particular to random loading. This work contains the results of uniaxial random-load fatigue tests with different mean and amplitude values performed on 6082-T6 aluminium alloy specimens. The proposed model is formulated to take into account the damage evolution at different load levels, and it includes the effect of the loading sequence by means of a recurrence formula derived for multilevel loading, considering complex load sequences. It is concluded that the "damaged stress interaction damage rule" proposed here allows better fatigue damage prediction than the widely used Palmgren-Miner rule, and that a formula derived for random fatigue can be used to predict fatigue damage and fatigue lifetime very easily. The results obtained by the model are compared with the experimental results and with those calculated by the most widely used fatigue damage model (Miner's model). The comparison shows that the proposed model gives a good estimation of the experimental results, with a smaller error than Miner's model.
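
For contrast with the proposed nonlinear rule, the baseline bookkeeping can be sketched as follows: invert an assumed power-law W-N curve and accumulate linear Palmgren-Miner damage block by block. The paper's "damaged stress interaction" rule is sequence-dependent and is not reproduced here; the coefficients and load blocks are illustrative.

```python
import numpy as np

A, b = 50.0, -0.4                       # assumed power-law fit W = A * N**b

def cycles_to_failure(W):
    """Invert the W-N curve: N(W) = (W / A)**(1 / b)."""
    return (W / A) ** (1.0 / b)

# (strain energy density per cycle, applied cycles) for each load block
blocks = [(2.0, 1e3), (3.5, 200), (1.2, 2000)]
D = sum(n / cycles_to_failure(W) for W, n in blocks)   # Miner: failure at D = 1
print(f"accumulated damage D = {D:.3f}")
```

Note that the Miner sum is independent of the ordering of the blocks, which is precisely the limitation that a sequence-dependent recurrence formula, such as the one derived in the paper, is meant to overcome.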

Keywords: damage accumulation, energy model, damage indicator, variable loading, random loading

Procedia PDF Downloads 393
28150 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase

Authors: Antoine Lauvray, Fabien Poulhaon, Pierre Michaud, Pierre Joyot, Emmanuel Duc

Abstract:

Additive Friction Stir Manufacturing (AFSM) is a new industrial process that follows the emergence of friction-based processes. It is a solid-state additive process that uses the energy produced by friction at the interface between a rotating non-consumable tool and a substrate; the friction depends on various parameters such as axial force, rotation speed, and friction coefficient. The feed material is a metallic rod that flows through a hole in the tool. Unlike Friction Stir Welding (FSW), for which abundant literature exists addressing many aspects, from process implementation to characterization and modeling, few research works have focused on AFSM, so there is still a lack of understanding of the physical phenomena taking place during the process. This research work aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way of studying the influence of the process parameters and of identifying a relevant process window. The deposition of material in AFSM takes place in several phases; in chronological order, these are the docking phase, the dwell-time phase, the deposition phase, and the removal phase. The present work focuses on the dwell-time phase, during which pure friction raises the temperature of the system composed of the tool, the filler material, and the substrate. Analytic modeling of the friction-based heat generation takes the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable owing to the self-lubrication of the system as the temperature rises and to the smoothing of the contacting surfaces' roughness over time. This study proposes, through numerical modeling followed by experimental validation, to examine the influence of the various input parameters on the dwell-time phase. Rotation speed, temperature, spindle torque, and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, strongly influence the temperature reached and/or the time required to reach the target temperature. The main outcome is the prediction of a process window, which is a key result for more efficient process implementation.
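
A toy lumped-capacitance sketch of the dwell-time phase follows: frictional heating (with a friction coefficient falling with temperature, standing in for self-lubrication) balanced against convective losses. All values are assumptions for illustration, not the calibrated model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

F, rpm, r_eff = 4000.0, 800.0, 0.01      # axial force [N], speed [rpm], effective radius [m]
omega = rpm * 2 * np.pi / 60.0
m_c = 150.0                              # lumped m*cp of tool + rod + substrate [J/K]
hA, T_inf = 2.0, 20.0                    # convective loss [W/K], ambient [degC]

def mu(T):
    """Assumed temperature-dependent friction coefficient (self-lubrication)."""
    return max(0.1, 0.4 - 5e-4 * (T - 20.0))

def dTdt(t, T):
    q = mu(T[0]) * F * omega * r_eff     # frictional heat generation [W]
    return [(q - hA * (T[0] - T_inf)) / m_c]

sol = solve_ivp(dTdt, (0.0, 30.0), [20.0], max_step=0.1)
print(f"temperature after 30 s dwell: {sol.y[0, -1]:.0f} degC")
```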

Keywords: numerical model, additive manufacturing, friction, process

Procedia PDF Downloads 144
28149 Optimal Performance of Plastic Extrusion Process Using Fuzzy Goal Programming

Authors: Abbas Al-Refaie

Abstract:

This study optimized the performance of the plastic extrusion process for drip irrigation pipes using fuzzy goal programming. Two responses were of main interest: roll thickness and hardness. Four main process factors were studied, and the L18 array was used for the experimental design. Individuals-moving range (I-MR) control charts were used to assess the stability of the process, while the process capability index was used to assess process performance. Confirmation experiments were conducted at the combination of optimal factor settings obtained by fuzzy goal programming. The results revealed that process capability improved significantly, from -1.129 to 0.8148 for roll thickness and from 0.0965 to 0.714 for hardness. Such improvement results in considerable savings in production and quality costs.
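
For reference, the process capability index quoted in the results can be computed as Cpk = min(USL - mu, mu - LSL) / (3 sigma); a negative value, such as the initial -1.129, means the process mean lies outside the specification limits. The sketch below uses illustrative limits and data, not the study's measurements.

```python
import numpy as np

def cpk(x, lsl, usl):
    """Process capability index from a sample and its specification limits."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

rng = np.random.default_rng(2)
thickness = rng.normal(1.00, 0.01, size=100)      # e.g. roll thickness [mm]
print(f"Cpk = {cpk(thickness, lsl=0.95, usl=1.05):.3f}")   # ~1.67 here
```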

Keywords: fuzzy goal programming, extrusion process, process capability, irrigation plastic pipes

Procedia PDF Downloads 264
28148 Creeping Control Strategy for Direct Shift Gearbox Based on the Investigation of Temperature Variation of the Wet Clutch

Authors: Biao Ma, Jikai Liu, Man Chen, Jianpeng Wu, Liyong Wang, Changsong Zheng

Abstract:

Proposing an appropriate control strategy is an effective and practical way to address the overheating of the wet multi-plate clutch in a Direct Shift Gearbox (DSG) under long-time creeping conditions. To this end, the temperature variation of the wet multi-plate clutch is first investigated by establishing a thermal resistance model for the gearbox cooling system. To calculate the generated heat flux and predict the clutch temperature precisely, the friction torque model is refined by introducing an improved friction coefficient that depends on the pressure, the relative speed, and the temperature. The heat transfer model and the refined friction torque model are then incorporated into the vehicle powertrain model to construct a comprehensive co-simulation model for the DSG vehicle. A creeping control strategy is then proposed and, to evaluate vehicle performance, a safety temperature of 250 °C is adopted as the key metric. During the creeping process, the temperatures of the two clutches always remain below the safety value, which demonstrates the effectiveness of the proposed control strategy in avoiding thermal failure of the clutches.
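
A sketch of the kind of friction torque model described is given below for a multi-plate clutch under uniform pressure. The functional form of the friction coefficient (here depending on relative speed and temperature; the paper's version also includes pressure) and all constants are assumptions for illustration.

```python
import math

def friction_torque(p, dw, T, n_faces=8, r_o=0.09, r_i=0.06):
    """Torque [Nm] of a multi-plate clutch under uniform pressure p [Pa].

    dw: relative sliding speed [rad/s]; T: clutch temperature [degC].
    """
    # Assumed mu(dw, T); the paper's improved coefficient also depends on p.
    mu = 0.12 * (1 + 0.02 * dw) * math.exp(-0.002 * (T - 90.0))
    r_eq = 2.0 / 3.0 * (r_o**3 - r_i**3) / (r_o**2 - r_i**2)   # equivalent radius
    area = math.pi * (r_o**2 - r_i**2)
    return n_faces * mu * p * area * r_eq

print(f"{friction_torque(p=5e5, dw=10.0, T=120.0):.0f} Nm")
```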

Keywords: creeping control strategy, direct shift gearbox, temperature variation, wet clutch

Procedia PDF Downloads 130
28147 Modelling of Moisture Loss and Oil Uptake during Deep-Fat Frying of Plantain

Authors: James A. Adeyanju, John O. Olajide, Akinbode A. Adedeji

Abstract:

A predictive mathematical model based on the fundamental principles of mass transfer was developed to simulate the moisture content and oil content during the Deep-Fat Frying (DFF) of dodo. The resulting governing equations, partial differential equations describing the rates of moisture loss and oil uptake, were solved numerically using an explicit Finite Difference Technique (FDT). Computer codes were written in the MATLAB environment to implement the FDT at different frying conditions and to simulate moisture loss and oil uptake during the DFF of dodo. Plantain samples were sliced to a thickness of 5 mm and fried at different frying oil temperatures (150, 160, and 170 °C) for periods varying from 2 to 4 min. The comparison between the predicted results and the experimental data used to validate the model showed reasonable agreement: the correlation coefficients between the predicted and experimental values ranged from 0.912 to 0.947 for the moisture transfer model and from 0.895 to 0.957 for the oil transfer model. The predicted results could be used further for the design, control, and optimization of the deep-fat frying process.
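
A minimal sketch of the explicit finite-difference scheme for a diffusion-type moisture equation, M_t = D M_xx, in a 5 mm slice with the surfaces held at an equilibrium moisture in the oil; the diffusivity and boundary values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

N, L, D = 21, 5e-3, 5e-9            # nodes, thickness [m], moisture diffusivity [m^2/s]
dx = L / (N - 1)
dt = 0.4 * dx**2 / D                # satisfies the explicit stability limit dt <= dx^2/(2D)
M = np.full(N, 1.8)                 # initial moisture (dry basis), illustrative
M_eq = 0.3                          # assumed surface equilibrium moisture in hot oil

t = 0.0
while t < 180.0:                    # fry for 3 min
    M[0] = M[-1] = M_eq             # both surfaces exposed to oil
    M[1:-1] += D * dt / dx**2 * (M[2:] - 2*M[1:-1] + M[:-2])
    t += dt

print(f"average moisture after {t:.0f} s: {M.mean():.2f}")
```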

Keywords: frying, moisture loss, modelling, oil uptake

Procedia PDF Downloads 442
28146 Embedding Knowledge Management in Business Process

Authors: Paul Ihuoma Oluikpe

Abstract:

The purpose of this paper is to explore and highlight the process of creating value for strategy management by embedding knowledge management (KM) in the business process. Knowledge management can be seen from the three-dimensional perspective of content, connections, and competencies. These dimensions can be embedded in the knowledge processes (create, capture, share, and apply) and operationalized within a business process to create a scenario in which knowledge is focused on enabling the process, and the process in turn generates outcomes. The application of knowledge management to the business processes of organizations is rare and underreported: few studies have explored this paradigm, even though research has tended to reinforce the notion that competitive advantage sits within the internal aspects of the firm. Given this notion, it is surprising that KM research and practice have not focused sufficiently on the business process, which is the basic unit of organizational decision implementation. This research serves to generate understanding of applying KM to the business process, using a large multinational in Sub-Saharan Africa as a case.

Keywords: knowledge management, business process, strategy, multinational

Procedia PDF Downloads 686
28145 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts have to deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical (model-based) methods is accompanied by widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties relate to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data; however, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function, and the numerical model and the discrepancy function are approximated by Gaussian processes (a surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes are estimated in a four-stage process (a modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the uncertainties considered is lower (second-order effects are neglected). To identify the considered uncertainties successfully, the formulation was extended to consider multiple responses. The efficiency of the algorithm was tested on a small-scale aluminium bridge structure subjected to thermal expansion from infrared heaters, and its performance was compared across responses measured at different points of the structure, together with the associated degrees of identifiability. A numerical FEM model of the structure was developed, with the stiffness of its supports as the parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on the previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of the uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
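
The modular idea can be sketched compactly: one Gaussian process emulates the numerical model, and a second absorbs the model/experiment residual, i.e., the systematic uncertainty. The 1-D toy problem below stands in for the bridge FEM; it omits the calibration parameters and the four-stage estimation of the actual method.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 15)[:, None]

model_out = np.sin(4 * x).ravel()                     # stand-in numerical model runs
field_obs = np.sin(4 * x).ravel() + 0.3 * x.ravel() + rng.normal(0, 0.02, 15)

gp_model = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(x, model_out)
residual = field_obs - gp_model.predict(x)            # systematic discrepancy
gp_delta = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(x, residual)

x_new = np.array([[0.5]])
corrected = gp_model.predict(x_new) + gp_delta.predict(x_new)
print(corrected)   # emulator plus discrepancy, i.e. the calibrated prediction
```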

Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 322
28144 Integrated Design in Additive Manufacturing Based on Design for Manufacturing

Authors: E. Asadollahi-Yazdi, J. Gardan, P. Lafon

Abstract:

Nowadays, manufacturers face the production of different versions of products owing to quality, cost, and time constraints. On the other hand, Additive Manufacturing (AM), as a production method based on a CAD model, disrupts the design and manufacturing cycle with new parameters. To address these issues, researchers have applied the Design for Manufacturing (DFM) approach to AM, but until now there has been no integrated approach for the design and manufacture of a product through AM. This paper therefore aims to provide a general methodology for managing the various production issues, as well as supporting interoperability between the AM process and the different Product Life Cycle Management tools. The problem is that the models of systems engineering used for managing complex systems cannot support product evolution and its impact on the product life cycle; it therefore seems necessary to provide a general methodology for managing the product diversity created by using AM. This methodology must consider manufacture and assembly as early as possible in the design stage. The latest DFM approach, as a methodology for analyzing the system comprehensively, integrates manufacturing constraints into the numerical model upstream; DFM for AM is thus used to import the characteristics of AM into the design and manufacturing process of a hybrid product and to manage the criteria coming from AM. The research also presents an integrated design method that takes into account knowledge of layer manufacturing technologies. For this purpose, an interface model based on the skin and skeleton concepts is provided: usage and manufacturing skins represent the functional surfaces of the product, while usage and manufacturing skeletons represent the material flow and the links between the skins. This integrated approach is therefore a helpful methodology for the designer and the manufacturer in various decisions, such as material and process selection, as well as in evaluating product manufacturability.

Keywords: additive manufacturing, 3D printing, design for manufacturing, integrated design, interoperability

Procedia PDF Downloads 312
28143 Advanced Approach to Analysis the Thin Strip Profile in Cold Rolling of Pair Roll Crossing and Shifting Mill Using an Arbitrary Lagrangian-Eulerian Technique

Authors: Abdulrahman Aljabri, Essam R. I. Mahmoud, Hamad Almohamedi, Zhengyi Jiang

Abstract:

Cold-rolled thin strip has received intensive attention through technological and theoretical progress in the rolling process, and researchers have focused on its control during rolling as an essential factor in producing thinner strip with good shape and profile. An advanced approach is proposed for analyzing the thin strip profile in cold rolling on a pair roll crossing and shifting mill using Finite Element Analysis (FEA) with an Arbitrary Lagrangian-Eulerian (ALE) technique. The ALE technique allows greater flexibility in the adjustment of the finite element mesh, which provides a significant tool for simulating the thin strip under realistic rolling process constraints and yields accurate model results. The FEA provides a theoretical basis for a 3D model for controlling strip shape and profile in thin strip rolling, delivers optimal rolling process parameters, and suggests corrective changes during the cold rolling of thin strip.

Keywords: pair roll crossing, work roll shifting, strip shape and profile, finite element modeling

Procedia PDF Downloads 93
28142 The Effect of Porous Alkali Activated Material Composition on Buffer Capacity in Bioreactors

Authors: Girts Bumanis, Diana Bajare

Abstract:

With the demand for primary energy continuously growing, the search for renewable and efficient energy sources is high on the agenda of our society, and one of the most promising energy sources is biogas technology. Residues from the dairy industry and milk processing could be used in biogas production; however, low efficiency and high cost impede the wide application of this technology. One of the main problems is the management and conversion of organic residues through the anaerobic digestion process, which is characterized by an acidic environment due to the low pH of whey (<6), so that an additional pH control system is required. The low buffering capacity of whey is responsible for rapid acidification in biological treatments; an alkali-activated material is therefore a promising solution to this problem. Alkali-activated material is formed from SiO2- and Al2O3-rich materials under a highly alkaline solution. After the material's structure-forming process is completed, free alkalis remain in the structure and are available for leaching, which provides potential buffer capacity. In this research, a porous alkali-activated material was investigated: a highly porous structure ensures gradual leaching of alkalis over time, which is important in the biogas digestion process. The mixture composition and the SiO2/Na2O and SiO2/Al2O3 ratios were studied to test the buffer capacity potential of the alkali-activated material. This research proved that, by changing the molar ratios of the components, it is possible to obtain a material with different buffer capacities, and this novel material was seen to have considerable potential for use in processes where buffer capacity and pH control are vitally important.

Keywords: alkaline material, buffer capacity, biogas production, bioreactors

Procedia PDF Downloads 240
28141 A Predictive Model for Turbulence Evolution and Mixing Using Machine Learning

Authors: Yuhang Wang, Jorg Schluter, Sergiy Shelyag

Abstract:

The high cost associated with high-resolution computational fluid dynamics (CFD) is one of the main challenges inhibiting the design, development, and optimisation of new combustion systems adapted for renewable fuels. In this study, we propose a physics-guided CNN-based model to predict turbulence evolution and mixing without requiring a traditional CFD solver. The model architecture is built upon U-Net and the inception module, while a physics-guided loss function is designed by introducing two additional physical constraints that enforce the conservation of both mass and pressure over the entire predicted flow field. The model is trained on Large Eddy Simulation (LES) results for a natural turbulent mixing layer at two Reynolds numbers (Re = 3000 and 30000). The model predictions show excellent agreement with the corresponding CFD solutions in terms of both the spatial distribution and the temporal evolution of turbulent mixing. Such promising predictive performance opens up the possibility of performing accurate, high-resolution, manifold-based combustion simulations at low computational cost, accelerating the iterative design process for new combustion systems.
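
A hedged sketch of what such a physics-guided loss could look like: the data misfit is augmented with a penalty on the finite-difference divergence of the predicted velocity field (mass conservation) and on a global pressure budget. The channel layout, residual forms, and weights below are assumptions; the paper's exact constraints are not reproduced.

```python
import tensorflow as tf

def physics_guided_loss(y_true, y_pred, lam_mass=0.1, lam_p=0.1):
    """MSE plus assumed mass- and pressure-conservation penalties.

    Channels assumed: 0 = u velocity, 1 = v velocity, 2 = pressure,
    on a (batch, H, W, 3) tensor.
    """
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    du_dx = y_pred[:, :, 1:, 0] - y_pred[:, :, :-1, 0]     # finite differences
    dv_dy = y_pred[:, 1:, :, 1] - y_pred[:, :-1, :, 1]
    div = du_dx[:, 1:, :] + dv_dy[:, :, 1:]                # align shapes
    mass_residual = tf.reduce_mean(tf.square(div))
    p_residual = tf.square(tf.reduce_mean(y_pred[..., 2]) -
                           tf.reduce_mean(y_true[..., 2]))
    return mse + lam_mass * mass_residual + lam_p * p_residual
```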

Keywords: computational fluid dynamics, turbulence, machine learning, combustion modelling

Procedia PDF Downloads 87
28140 Modelling Heat Transfer Characteristics in the Pasteurization Process of Medium Long Necked Bottled Beers

Authors: S. K. Fasogbon, O. E. Oguegbu

Abstract:

Pasteurization is one of the most important steps in the preservation of beer products; it improves shelf life by inactivating almost all the spoilage organisms present. However, it is notoriously difficult to determine the slowest heating zone, the temperature profile, and the pasteurization units inside bottled beer during pasteurization, and there have been significant experimental and ANSYS Fluent studies of the problem. This work developed a computational fluid dynamics model using COMSOL Multiphysics. The model was simulated to determine the slowest heating zone, the temperature profile, and the pasteurization units inside the bottled beer during the pasteurization process, and the results of the simulation were compared with existing data in the literature. The results showed that the location and size of the slowest heating zone depend on the time-temperature combination of each zone; that the temperature profile of the bottled beer is affected by the natural convection resulting from density variation during the pasteurization process; and that the pasteurization units increase with time, subject to the temperature reached by the beer. Although the results of this work agreed with the literature with regard to the slowest heating zone and the temperature profiles, the pasteurization-unit results did not agree; it is suspected that these were strongly affected by the bottle geometry and by the specific heat capacity and density of the beer in question. The work concludes that, for effective pasteurization to be achieved, the spray water temperature and the time spent by the bottled product in each of the pasteurization zones need to be optimized.
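
For context, pasteurization units in brewing are conventionally accumulated from the temperature history as PU = t x 1.393^(T - 60), with one PU equal to one minute at 60 °C; the sketch below integrates an illustrative temperature history of the slowest heating zone step by step. The history values are placeholders, not the simulated profile.

```python
def pasteurization_units(temps_c, dt_min=0.5):
    """Accumulate PU over a temperature history sampled every dt_min minutes."""
    return sum(dt_min * 1.393 ** (t - 60.0) for t in temps_c)

# Illustrative heat-hold-cool history of the slowest heating zone [degC]
history = ([20 + 2.5 * i for i in range(20)]      # heating ramp
           + [65.0] * 20                           # holding zone
           + [65 - 2.0 * i for i in range(15)])    # cooling ramp
print(f"total PU: {pasteurization_units(history):.1f}")
```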

Keywords: modeling, heat transfer, temperature profile, pasteurization process, bottled beer

Procedia PDF Downloads 200
28139 Phase II Monitoring of First-Order Autocorrelated General Linear Profiles

Authors: Yihua Wang, Yunru Lai

Abstract:

Statistical process control has been successfully applied in a variety of industries. In some applications, the quality of a process or product is better characterized and summarized by a functional relationship between a response variable and one or more explanatory variables; a collection of this type of data is called a profile. Profile monitoring is used to understand and check the stability of this relationship, or curve, over time. The assumption of independent error terms is common in existing profile monitoring studies; in many applications, however, the profile data show correlations over time. We therefore focus on a general linear regression model with first-order autocorrelation between profiles. We propose an exponentially weighted moving average (EWMA) charting scheme to monitor this type of profile. A simulation study shows that the proposed methods outperform existing schemes on the average run length criterion.
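
The statistic behind an EWMA chart can be sketched as follows: z_i = lambda * x_i + (1 - lambda) * z_{i-1}, with time-varying control limits mu0 +/- L * sigma0 * sqrt(lambda / (2 - lambda) * (1 - (1 - lambda)^(2i))). The monitored x_i would be, for example, an estimated profile coefficient; the data below are placeholders, and this textbook form omits the paper's autocorrelation adjustments.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma0=1.0):
    """Return (z, LCL, UCL, alarm) tuples for each monitored sample."""
    z, out = mu0, []
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        width = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        out.append((z, mu0 - width, mu0 + width, abs(z - mu0) > width))
    return out

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.0, 1, 20)])   # shift at i = 30
signals = [i for i, (_, _, _, alarm) in enumerate(ewma_chart(x)) if alarm]
print("first out-of-control signal at sample:", signals[0] if signals else None)
```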

Keywords: autocorrelation, EWMA control chart, general linear regression model, profile monitoring

Procedia PDF Downloads 457
28138 Recovery of Hydrogen Converter Efficiency Affected by Poisoning of Catalyst with Increasing of Temperature

Authors: Enayat Enayati, Reza Behtash

Abstract:

The purpose of the H2 removal system is to reduce the content of hydrogen and other combustibles in the CO2 feed in order to avoid developing a potentially explosive condition in the synthesis. To reduce this possibility as much as possible, the hydrogen in the fresh CO2 is removed in the hydrogen converter: the partly compressed CO2/air mixture is led through the hydrogen converter (reactor), where the H2 present in the CO2 is reduced by catalytic combustion to less than 50 ppm (vol), according to the exothermic reaction 2H2 + O2 → 2H2O + heat. The catalyst in the hydrogen converter consists of platinum on an aluminium oxide carrier. Low catalyst activity may be due to catalyst poisoning; it results in an increase of the hydrogen content in the CO2 sent to the synthesis, and it is advised to shut down the plant when the converter outlet concentration rises above 100 ppm, to prevent an undesirable gas composition in the plant. Since replacement of the catalyst is time-consuming and costly, we instead increase the inlet temperature of the hydrogen converter, following the Arrhenius equation k = k0 exp(-Ea/(RT)), where k is the rate constant of the chemical reaction, k0 is the pre-exponential factor, Ea is the activation energy, and R is the universal gas constant. Raising the inlet temperature of the hydrogen converter increases the rate constant of the reaction and thereby reduced the hydrogen slip from 125 ppm to 70 ppm.
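
The Arrhenius argument can be quantified directly: for a fixed pre-exponential factor, the gain in rate constant from raising the temperature is k(T2)/k(T1) = exp(-Ea/R * (1/T2 - 1/T1)). The activation energy and temperatures below are illustrative assumptions, not the plant's values.

```python
import math

R = 8.314          # universal gas constant [J/(mol K)]
Ea = 60e3          # assumed activation energy [J/mol]

def rate_ratio(T1, T2):
    """k(T2) / k(T1) for the same pre-exponential factor k0."""
    return math.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

print(f"k gain from 400 K to 430 K: x{rate_ratio(400.0, 430.0):.2f}")  # ~x3.5
```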

Keywords: catalyst, converter, poisoning, temperature

Procedia PDF Downloads 814
28137 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur

Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh

Abstract:

A blast furnace is a counter-current process in which the burden descends from the top while hot gases ascend from the bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is erratic burden descent inside the furnace; sometimes this problem is so acute that burden descent stops, resulting in hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. The situation becomes more adverse when blast furnaces are operated at a low coke rate and a high coal injection rate with adverse raw materials such as high-alumina ore and high-ash coke. Over the last three years, H Blast Furnace at Tata Steel was able to reduce its coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm, figures close to world benchmarks, and to expand profitability. To sustain this regime, the elimination of blast furnace irregularities such as hanging, channeling, and scaffolding is essential. This paper illustrates how a zero-hanging spell was sustained for three consecutive years under low-coke-rate operation through improvements in burden characteristics and burden distribution, changes in the slag regime, casting practices, and adequate automation of furnace operation. Models have been created to understand and upgrade the blast furnace process: a model was developed to keep the slag viscosity in the desired range so as to attain proper burden permeability, and a channeling prediction model was developed to recognize channeling symptoms so that early action can be taken. These models have helped to a great extent in standardizing the control decisions of operators at H Blast Furnace, Tata Steel, Jamshedpur, and thus in achieving process stability over the last three years.

Keywords: hanging, channelling, blast furnace, coke

Procedia PDF Downloads 191