Search results for: Process Models.
6514 Effect of Injection Moulding Process Parameter on Tensile Strength Using Taguchi Method
Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma
Abstract:
The plastics industry plays a very important role in the economy of any country and is generally among its leading sectors. Since metals and their alloys are only rarely available on the earth, producing plastic products and components, which find application in many industrial as well as household consumer products, is beneficial, and 50% of plastic products are manufactured by the injection moulding process. For the production of better-quality products, the quality characteristics and performance of the product have to be controlled. The process parameters play a significant role in the production of plastics, hence the control of process parameters is essential. This paper describes the effect of parameter selection on the injection moulding process and aims to define suitable parameters for producing plastic products. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi’s orthogonal array to achieve the result with the least number of experiments. Polypropylene was studied as the plastic material. The tensile strength of specimens produced on the injection moulding machine was tested on a universal testing machine. Using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time were obtained. We found that the process parameter packing pressure contributes most to the production of plastic products with good tensile strength.
Keywords: Injection moulding, tensile strength, Taguchi method, polypropylene.
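Below is a minimal sketch, with hypothetical tensile-strength data rather than the study’s measurements, of the larger-is-better signal-to-noise ratio used in Taguchi analysis and of the factor main effects over an L9-style orthogonal array (factor names follow the abstract; levels and values are placeholders).

```python
# Larger-is-better S/N ratio and main effects for a Taguchi L9 design.
# Factor levels and tensile-strength replicates below are hypothetical.
import numpy as np

# L9 orthogonal array: 9 runs x 4 factors (injection pressure, melt
# temperature, packing pressure, packing time), each at 3 levels (0, 1, 2).
L9 = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
               [1,0,1,2],[1,1,2,0],[1,2,0,1],
               [2,0,2,1],[2,1,0,2],[2,2,1,0]])

# Two tensile-strength replicates (MPa) per run -- placeholder values.
y = np.array([[30.1,29.8],[31.2,31.5],[32.0,31.7],
              [30.9,30.6],[32.4,32.2],[31.1,31.4],
              [31.8,32.1],[30.5,30.2],[33.0,32.7]])

# Larger-is-better S/N ratio for each run: -10*log10(mean(1/y^2)).
sn = -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

# Mean S/N per factor level; the factor with the widest range (delta)
# contributes most to tensile strength.
for f, name in enumerate(["injection pressure", "melt temperature",
                          "packing pressure", "packing time"]):
    means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"{name}: level means {np.round(means, 2)}, delta {max(means)-min(means):.2f}")
```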
6513 Fast and Efficient On-Chip Interconnection Modeling for High Speed VLSI Systems
Authors: A.R. Aswatha, T. Basavaraju, S. Sandeep Kumar
Abstract:
Timing-driven physical design, synthesis and optimization tools need efficient closed-form delay models for estimating the delay associated with each net in an integrated circuit (IC) design. The total number of nets in a modern IC design has increased dramatically and now exceeds millions, so efficient modeling of interconnections is needed for high-speed ICs. This paper presents closed-form expressions for RC and RLC interconnection trees in current-mode signaling, which can be implemented in a VLSI design tool. These analytical model expressions can be used for accurate calculation of delay after the design clock tree has been laid out and the design is fully routed. Evaluation of these analytical models is several orders of magnitude faster than simulation using SPICE.
Keywords: IC design, RC/RLC interconnection, VLSI systems.
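The abstract does not reproduce the closed-form expressions themselves; as a simpler stand-in, the following is a minimal sketch of the classic Elmore delay estimate for an RC interconnect tree, with a hypothetical topology and hypothetical R/C values.

```python
# Elmore delay for an RC interconnect tree: T(sink) = sum over nodes k of
# C_k * R_shared(sink, k), where R_shared is the resistance of the path
# portion common to the source-to-sink and source-to-k paths.
# Topology and R/C values below are hypothetical.

# node: (parent, branch resistance to parent in ohms, node capacitance in F)
tree = {
    "n1": (None, 0.0, 0.0),        # driver / source node
    "n2": ("n1", 50.0, 10e-15),
    "n3": ("n2", 30.0, 15e-15),    # sink A
    "n4": ("n2", 40.0, 20e-15),    # sink B
}

def path(node):
    """Nodes on the path from the source down to `node`."""
    p = []
    while node is not None:
        p.append(node)
        node = tree[node][0]
    return p

def elmore_delay(sink):
    """First-order (Elmore) delay estimate at `sink`, in seconds."""
    on_sink_path = set(path(sink))
    delay = 0.0
    for k, (_, _, cap) in tree.items():
        # resistance of the path segment shared by the routes to `sink` and `k`
        r_shared = sum(tree[n][1] for n in path(k) if n in on_sink_path)
        delay += cap * r_shared
    return delay

for sink in ("n3", "n4"):
    print(sink, f"{elmore_delay(sink):.3e} s")
```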
6512 Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks
Authors: Tadashi Watanabe, Jinya Katsuyama, Akihiro Mano
Abstract:
The estimation of leak flow rates through narrow cracks in structures is of importance for nuclear reactor safety, since the leak flow could be detected before a loss-of-coolant accident occurs. The two-phase critical leak flow rates are calculated using the system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, as well as the leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature, but are affected largely by the roughness of the crack surface.
Keywords: Crack, critical flow, leak, roughness.
6511 Optimization of End Milling Process Parameters for Minimization of Surface Roughness of AISI D2 Steel
Authors: Pankaj Chandna, Dinesh Kumar
Abstract:
The present work analyses different parameters of end milling to minimize the surface roughness of AISI D2 steel. D2 steel is generally used for stamping or forming dies, punches, forming rolls, knives, slitters, shear blades, tools, scrap choppers, tyre shredders, etc. Surface roughness is one of the main indices that determine the quality of machined products and is influenced by various cutting parameters. In machining operations, achieving the desired surface quality by optimization of machining parameters is a challenging job. For mating components, surface roughness becomes even more important, because these quality characteristics are highly correlated and are expected to be influenced directly or indirectly by the process parameters or their interactive effects (i.e. on the process environment). In this work, the effects of selected process parameters on surface roughness and the subsequent setting of parameters with their levels have been accomplished by Taguchi’s parameter design approach. The experiments were performed as per the combination of levels of different process parameters suggested by an L9 orthogonal array. End milling of AISI D2 steel with a carbide tool was investigated experimentally by varying feed, speed and depth of cut, and the surface roughness was measured using a surface roughness tester. Analyses of variance were performed on the mean and the signal-to-noise ratio to estimate the contribution of the different process parameters to the process.
Keywords: D2 Steel, Orthogonal Array, Optimization, Surface Roughness, Taguchi Methodology.
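A minimal sketch, using hypothetical roughness data, of how the percentage contribution of each cutting parameter can be estimated from an L9 experiment via ANOVA-style sums of squares on smaller-is-better S/N ratios (the factor names and values are placeholders, not the study’s results).

```python
# Percent contribution of each cutting parameter from an L9 Taguchi design,
# using ANOVA-style sums of squares on smaller-is-better S/N ratios.
# Speed/feed/depth-of-cut levels and Ra values below are hypothetical.
import numpy as np

L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])
ra = np.array([0.82, 0.95, 1.10, 0.88, 1.02, 0.76, 1.15, 0.79, 0.93])  # um

# Smaller-is-better S/N ratio: -10*log10(mean(y^2)); one observation per run.
sn = -10.0 * np.log10(ra**2)

total_ss = np.sum((sn - sn.mean())**2)
for f, name in enumerate(["speed", "feed", "depth of cut"]):
    level_means = np.array([sn[L9[:, f] == lvl].mean() for lvl in range(3)])
    ss_factor = 3 * np.sum((level_means - sn.mean())**2)  # 3 runs per level
    print(f"{name}: contribution {100*ss_factor/total_ss:.1f} %")
```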
6510 Optimization of Machining Parametric Study on Electrical Discharge Machining
Authors: Rakesh Prajapati, Purvik Patel, Hardik Patel
Abstract:
Productivity and quality are two important aspects that have become great concerns in today’s competitive global market. Every production/manufacturing unit mainly focuses on these areas in relation to the process as well as the product developed. The electrical discharge machining (EDM) process is still largely experience-driven: the selected parameters are often far from optimal, and selecting optimal parameters experimentally is costly and time-consuming. The material removal rate (MRR) during the process is considered as a productivity estimate with the aim of maximizing it, while surface roughness, taken as the most important output parameter, is to be minimized. These two requirements, opposite in nature, are simultaneously satisfied by selecting an optimal process environment (optimal parameter setting). The objective function is obtained by regression analysis and analysis of variance, and is then optimized using a genetic algorithm. The model is shown to be effective: MRR and surface roughness are improved using the optimized machining parameters.
Keywords: Material removal rate, TWR, OC, DOE, ANOVA, MINITAB.
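A minimal sketch of the overall approach, assuming hypothetical fitted regression models for MRR and surface roughness and a very simple genetic algorithm; neither the regressions nor the GA settings are the authors’ actual ones.

```python
# Simple genetic algorithm maximizing a weighted objective built from two
# hypothetical regression models: MRR (to maximize) and Ra (to minimize).
# Decision variables: pulse current Ip [A], pulse-on time Ton [us].
import numpy as np

rng = np.random.default_rng(0)
LB, UB = np.array([4.0, 50.0]), np.array([16.0, 200.0])   # variable bounds

def mrr(x):   # hypothetical fitted regression, mm^3/min
    ip, ton = x
    return 2.0 + 1.1*ip + 0.02*ton - 0.00005*ton**2

def ra(x):    # hypothetical fitted regression, um
    ip, ton = x
    return 1.5 + 0.25*ip + 0.004*ton

def fitness(x):
    # weighted sum: reward MRR, penalize roughness
    return mrr(x) - 3.0*ra(x)

pop = rng.uniform(LB, UB, size=(40, 2))
for _ in range(100):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[-20:]]                  # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(2) < 0.5, a, b)     # uniform crossover
        child = child + rng.normal(0.0, 0.05*(UB - LB)) # Gaussian mutation
        kids.append(np.clip(child, LB, UB))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best Ip, Ton:", np.round(best, 2),
      "MRR:", round(mrr(best), 2), "Ra:", round(ra(best), 2))
```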
6509 Developing a Statistical Model for Electromagnetic Environment for Mobile Wireless Networks
Authors: C. Temaneh Nyah
Abstract:
The analysis of the electromagnetic environment using deterministic mathematical models is characterized by the impossibility of analyzing a large number of interacting network stations with a priori unknown parameters; this is characteristic, for example, of mobile wireless communication networks. One of the tasks of the tools used in the design, planning and optimization of a mobile wireless network is to carry out simulation of the electromagnetic environment based on mathematical modelling methods, including computer experiment, and to estimate its effect on radio communication devices. This paper proposes the development of a statistical model of the electromagnetic environment of a mobile wireless communication network by describing the parameters and factors affecting it, including the propagation channel, and their statistical models.
Keywords: Electromagnetic environment, statistical model, wireless communication network.
6508 Investigating the Process Kinetics and Nitrogen Gas Production in Anammox Hybrid Reactor with Special Emphasis on the Role of Filter Media
Authors: Swati Tomar, Sunil Kumar Gupta
Abstract:
Anammox is a novel and promising technology that has changed the traditional concept of biological nitrogen removal. The process facilitates direct oxidation of ammoniacal nitrogen under anaerobic conditions with nitrite as the electron acceptor, without the addition of external carbon sources. The present study investigated the feasibility of an Anammox Hybrid Reactor (AHR), combining the dual advantages of suspended and attached growth media, for the biodegradation of ammoniacal nitrogen in wastewater. The experimental unit consisted of four 5 L AHRs inoculated with a mixed seed culture containing anoxic and activated sludge (1:1). The process was established by feeding the reactors with synthetic wastewater containing NH4-N and NO2-N in the ratio 1:1 at a hydraulic retention time (HRT) of 1 day. The reactors were gradually acclimated to higher ammonium concentrations until they attained pseudo-steady-state removal at a total nitrogen concentration of 1200 mg/L. During this period, the performance of the AHR was monitored at twelve different HRTs varying from 0.25 to 3.0 d, with the nitrogen loading rate (NLR) increasing from 0.4 to 4.8 kg N/m3·d. The AHR demonstrated significantly higher nitrogen removal (95.1%) at the optimal HRT of 1 day. The filter media in the AHR contributed an additional 27.2% ammonium removal, along with a 72% reduction in the sludge washout rate. This may be attributed to the functional mechanism of the filter media, which acts as a mechanical sieve and reduces the sludge washout rate manyfold. This enhances the biomass retention capacity of the reactor by 25%, which is the key parameter for successful operation of high-rate bioreactors. The effluent nitrate concentration, which is one of the bottlenecks of the anammox process, was also minimised significantly (42.3-52.3 mg/L). Process kinetics was evaluated using first-order and Grau second-order models. The first-order substrate removal rate constant was found to be 13.0 d-1. Model validation revealed that the Grau second-order model was more precise and predicted effluent nitrogen concentration with the least error (1.84±10%). A new mathematical model based on mass balance was developed to predict N2 gas production in the AHR. The mass balance model derived from total nitrogen showed a significantly higher correlation (R2 = 0.986) and predicted N2 gas with the least error of precision (0.12±8.49%). SEM study of the biomass indicated the presence of a heterogeneous population of cocci and rod-shaped bacteria with average diameters varying from 1.2 to 1.5 mm. Owing to the enhanced nitrogen removal efficiency, coupled with meagre production of effluent nitrate and its ability to retain high biomass, the AHR proved to be the most competitive reactor configuration for dealing with nitrogen-laden wastewater.
Keywords: Anammox, filter media, kinetics, nitrogen removal.
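A minimal sketch of fitting first-order and Grau second-order substrate-removal kinetics using common linearized forms; the (HRT, effluent nitrogen) data below are hypothetical, not the study’s measurements, and the exact model formulation used by the authors may differ.

```python
# Fitting first-order and Grau second-order substrate-removal models to
# hypothetical (HRT, effluent total nitrogen) data using common linearized
# forms; not the authors' data or their exact model formulation.
import numpy as np

S0 = 1200.0                                            # influent total N, mg/L
hrt = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])        # d
Se  = np.array([420.0, 250.0, 60.0, 40.0, 30.0, 22.0]) # mg/L (hypothetical)

# First order (plug-flow-type form): ln(S0/Se) = k1 * HRT
k1 = np.polyfit(hrt, np.log(S0 / Se), 1)[0]
print(f"first-order k1 ~ {k1:.1f} 1/d")

# Grau second order (linearized): S0*HRT/(S0 - Se) = a + b*HRT
b, a = np.polyfit(hrt, S0 * hrt / (S0 - Se), 1)
print(f"Grau second-order intercept a ~ {a:.3f} d, slope b ~ {b:.3f}")

# Predicted effluent N from the Grau model and its mean relative error
Se_pred = S0 * (1.0 - hrt / (a + b * hrt))
print("mean relative error:", np.mean(np.abs(Se_pred - Se) / Se))
```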
6507 Solving Process Planning and Scheduling with Number of Operation Plus Processing Time Due-Date Assignment Concurrently Using a Genetic Search
Authors: Halil Ibrahim Demir, Alper Goksu, Onur Canpolat, Caner Erden, Melek Nur
Abstract:
Traditionally, process planning, scheduling and due-date assignment are performed sequentially and separately. The high interrelation between these functions makes their integration very useful. Although there are numerous works on integrated process planning and scheduling, and many works on scheduling with due-date assignment, there are only a few works on the integration of all three functions. Here we tested different integration levels of these three functions and found the fully integrated version to be the best. We applied genetic search and random search, and genetic search was found to be better than random search. We penalized all earliness, tardiness and due-date-related costs; since all three terms are undesired, it is better to penalize all of them.
Keywords: Process planning, scheduling, due-date assignment, genetic algorithm, random search.
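A minimal sketch of one common way to score a schedule with a combined earliness, tardiness and due-date penalty; the weights, the due-date rule constants and the job data are hypothetical, not the authors’ cost function.

```python
# Combined earliness/tardiness/due-date cost for a single-machine sequence.
# Due dates follow a "number of operations plus processing time" style rule;
# all weights and job data are hypothetical.

jobs = [  # (processing time, number of operations)
    (5, 2), (3, 1), (8, 3), (4, 2),
]
K_OP, K_PT = 2.0, 1.5          # due date = K_OP*ops + K_PT*proc_time
W_E, W_T, W_D = 1.0, 4.0, 0.5  # earliness, tardiness, due-date weights

def schedule_cost(sequence):
    t, cost = 0.0, 0.0
    for j in sequence:
        p, ops = jobs[j]
        due = K_OP * ops + K_PT * p
        t += p                               # completion time of job j
        cost += (W_E * max(0.0, due - t)     # earliness penalty
                 + W_T * max(0.0, t - due)   # tardiness penalty
                 + W_D * due)                # cost of quoting a long due date
    return cost

print(schedule_cost([1, 3, 0, 2]))
print(schedule_cost([2, 0, 3, 1]))
```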
6506 A Survey of 2nd Year Students’ Frequent English Writing Errors and the Effects of Participatory Error Correction Process
Authors: Chaiwat Tantarangsee
Abstract:
The purposes of this study are 1) to study the effects of the participatory error correction process and 2) to find out the students’ satisfaction with such an error correction process. This is a quasi-experimental study with a single group, in which data were collected five times, before and after four experimental applications of the participatory error correction process, which included providing coded indirect corrective feedback in the students’ texts together with error treatment activities. The sample comprised 52 second-year English major students of the Faculty of Humanities and Social Sciences, Suan Sunandha Rajabhat University. The tool for the experimental study was the lesson plan of the course Reading and Writing English for Academic Purposes II, and the tools for data collection were five writing tests of short texts and a questionnaire. Based on formative evaluation of the students’ writing ability before and after each of the four experiments, the research findings show higher scores with a statistically significant difference at 0.00. Moreover, in terms of the effect size of the process, the means of the students’ scores before and after the four experiments give d equal to 0.6801, 0.5093, 0.5071 and 0.5296, respectively. It can be concluded that the participatory error correction process enables all of the students to learn equally well and improves their ability to write short texts. Finally, the students’ overall satisfaction with the participatory error correction process is at a high level (mean = 4.39, S.D. = 0.76).
Keywords: Coded indirect corrective feedback, participatory error correction process, error treatment.
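A minimal sketch of one common formulation of the effect size d reported above, computed from paired pre-test and post-test scores; the score lists are hypothetical placeholders.

```python
# Cohen's d effect size for pre-test / post-test writing scores, using the
# pooled standard deviation.  The scores are hypothetical, not the study's data.
import statistics as st

pre  = [12, 14, 11, 15, 13, 10, 14, 12, 13, 11]
post = [15, 16, 14, 17, 15, 13, 16, 14, 16, 13]

mean_diff = st.mean(post) - st.mean(pre)
pooled_sd = ((st.stdev(pre)**2 + st.stdev(post)**2) / 2) ** 0.5
d = mean_diff / pooled_sd
print(f"effect size d = {d:.4f}")
```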
6505 Stage-Gate Framework Application for Innovation Assessment among Small and Medium-Sized Enterprises
Authors: Indre Brazauskaite, Vilte Auruskeviciene
Abstract:
The paper explores the application of the Stage-Gate framework to innovation maturity among small and medium-sized enterprises (SMEs). Innovation management is becoming an essential business survival process for organizations of all sizes, one that can be evaluated and audited systematically. This research systematically defines and assesses the innovation process from the perspective of the company’s top management. The empirical research explores attitudes and existing practices of innovation management in SMEs in the Baltic countries. It structurally investigates the current innovation management practices, the level of standardization, and potential challenges in the area. The findings allow structuring existing practices based on an institutionalized model and contribute to a more advanced understanding of the innovation process among SMEs. Practically, the findings contribute to more advanced decision-making and business planning in the process.
Keywords: innovation measure, innovation process, small and medium-sized enterprises, SMEs, stage-gate framework.
6504 Monitoring Patents Using the Statistical Process Control
Authors: Stephanie Russo Fabris, Edmara Thays Neres Menezes, Ruirogeres dos Santos Cruz, Lucio Leonardo Siqueira Santos, Suzana Leitao Russo
Abstract:
Statistical process control (SPC) is one of the most powerful tools developed to assist in the effective control of quality; it involves collecting, organizing and interpreting data during production. This article aims to show how industries can use SPC to control and continuously improve product quality by monitoring production so that deviations of the parameters representing the process can be detected, reducing the amount of off-specification product and thus the cost of production. This study also carried out a technological forecast in order to characterize the research being done related to SPC. The survey was conducted in the Espacenet and WIPO databases and at the National Institute of Industrial Property (INPI). The United States accounts for the largest share of depositors, together with deposits via the PCT, and classification section F was the most frequently represented.
Keywords: Statistical process control, industries.
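A minimal sketch of the basic SPC calculation, an X-bar chart with control limits estimated from subgroup ranges; the A2 constant is the standard table value for subgroups of four, and the measurements are hypothetical.

```python
# X-bar and R control chart limits for subgrouped measurements (n = 4).
# A2 is the standard Shewhart constant for subgroup size 4; the data are
# hypothetical measurements.
import numpy as np

A2 = 0.729
data = np.array([
    [10.1,  9.9, 10.0, 10.2],
    [10.3, 10.1,  9.8, 10.0],
    [ 9.7, 10.0, 10.1,  9.9],
    [10.2, 10.4, 10.1, 10.0],
    [ 9.9,  9.8, 10.0, 10.1],
])

xbar = data.mean(axis=1)                      # subgroup means
ranges = data.max(axis=1) - data.min(axis=1)  # subgroup ranges
grand_mean, rbar = xbar.mean(), ranges.mean()

ucl, lcl = grand_mean + A2 * rbar, grand_mean - A2 * rbar
print(f"centre line {grand_mean:.3f}, UCL {ucl:.3f}, LCL {lcl:.3f}")
out = np.where((xbar > ucl) | (xbar < lcl))[0]
print("out-of-control subgroups:", out)
```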
6503 Decoupled Scheduling in Meta Environment
Authors: Ponsy R.K. Sathia Bhama, Thamarai Selvi Soma Sundaram, R. Sivakama Sundari, R. Bakiyalakshmi, K. Thamizharasi
Abstract:
Grid scheduling is the process of mapping grid jobs to resources over multiple administrative domains. Traditionally, application-level schedulers have been tightly integrated with the application itself and were not easily applied to other applications. The design presented here is generic: it decouples the scheduler core (the search procedure) from the application-specific components (e.g. application performance models) and platform-specific components (e.g. collection of resource information) used by the search procedure. In this decoupled approach the application details are not revealed completely to the broker; instead, the customer gives the application to the resource provider for execution. In a decoupled approach, apart from scheduling, resource selection can be performed independently in order to achieve scalability.
Keywords: Meta, grid scheduling, application-level scheduler, decoupling, scheduler core and performance model.
6502 Detecting Earnings Management via Statistical and Neural Network Techniques
Authors: Mohammad Namazi, Mohammad Sadeghzadeh Maharluie
Abstract:
Predicting earnings management is vital for capital market participants, financial analysts and managers. The aim of this research is to answer the following question: is there a significant difference between a regression model and neural network models in predicting earnings management, and which one leads to superior predictions? To approach this question, a linear regression (LR) model was compared with two neural networks, a multi-layer perceptron (MLP) and a generalized regression neural network (GRNN). The population of this study includes 94 companies listed on the Tehran Stock Exchange (TSE) from 2003 to 2011. After the results of all models were acquired, ANOVA was applied to test the hypotheses. In general, the statistical results showed that the precision of the GRNN did not differ significantly from that of the MLP. In addition, the mean square errors of the MLP and GRNN differed significantly from that of the multivariable LR model. These findings support the notion of nonlinear behavior of earnings management. Therefore, it is more appropriate for capital market participants to analyze earnings management using neural network techniques rather than linear regression models.
Keywords: Earnings management, generalized regression neural networks, linear regression, multi-layer perceptron, Tehran Stock Exchange.
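A minimal sketch of the linear-versus-neural comparison on synthetic nonlinear data using scikit-learn; the features, target and hyperparameters are hypothetical, and the GRNN is omitted since scikit-learn does not provide one.

```python
# Comparing a linear regression with an MLP on synthetic nonlinear data,
# echoing the linear-vs-neural comparison above.  Features, target and
# hyperparameters are hypothetical; GRNN is omitted (not in scikit-learn).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))               # e.g. accrual-based predictors
y = (0.5*X[:, 0] + np.sin(X[:, 1]) + 0.3*X[:, 2]*X[:, 3]
     + 0.1*rng.normal(size=500))            # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lr = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)

for name, model in [("LR", lr), ("MLP", mlp)]:
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: test MSE = {mse:.4f}")
```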
6501 IOT Based Process Model for Heart Monitoring Process
Authors: Dalyah Y. Al-Jamal, Maryam H. Eshtaiwi, Liyakathunisa Syed
Abstract:
Connecting health services with technology is in huge demand as people’s health conditions are becoming worse day by day. In fact, engaging new technologies such as the Internet of Things (IoT) in medical services can enhance patient care. Specifically, patients suffering from chronic diseases, such as cardiac patients, need special care and monitoring. Some efforts were previously made to automate and improve patient monitoring systems; however, those efforts have limitations and lack the real-time capability needed for chronic diseases. In this paper, an improved process model for a patient monitoring system specialized for cardiac patients is presented. A survey was distributed and interviews were conducted to gather the requirements needed to improve the cardiac patient monitoring system. The Business Process Model and Notation (BPMN) language was used to model the proposed process. The proposed system uses IoT technology to assist doctors in remotely monitoring and following up with their heart patients in real time. In order to validate the effectiveness of the proposed solution, simulation analysis was performed using the Bizagi Modeler tool. The analysis results show performance improvements in the heart monitoring process. For the future, the authors suggest enhancing the proposed system to cover all chronic diseases.
Keywords: Business process model and notation, cardiac patient, cardiac monitoring, heart monitoring, healthcare, internet of things, remote patient monitoring system, process model, telemedicine, wearable sensors.
6500 Ideological Framing in Television News: The Case of “Settlement Process”
Authors: Mete Kazaz, Birol Gülnar
Abstract:
Television news has gained a new dimension in terms of ideological approaches as a result of such factors as globalization, cross monopolization and the presence of international companies, and certain strategies have been developed at the production, presentation and distribution stages of news. In this study, television news about a process called the “settlement process” was investigated. In this framework, news about the settlement process on the TV channels TRT 1, ATV, FOX TV, NTV, HABERTÜRK, TRT HABER and STV was investigated using the content analysis method in terms of the strategies of ideology construction, attitude towards the party in power, attitude towards the parties in opposition, and attitude towards the BDP (Peace and Democracy Party) and Imrali (the island where Abdullah Ocalan, head of the PKK, is kept). First, the aforementioned TV channels were selected randomly from three groups in order to reflect the representational capacity of commercial, news and public channels. The study covers 557 news items broadcast in the main news bulletins between the dates of 15 March 2013 and 15 March 2013. While there was a positive attitude towards the government in a sizable portion of the news about the settlement process (63.6%), 25.3% of the news was impartial towards the government and 11.3% had a negative attitude. On the other hand, there was a negative attitude towards the opposition in a considerable portion of the news about the settlement process (56.1%); 35.9% of the news was impartial towards the opposition, whereas 8.0% had a positive attitude. Among the ideology construction strategies, 34.9% of the news about the settlement process used the legitimization strategy, 22.8% the unification strategy, 15.7% the reification strategy, 15.6% the fragmentation strategy, and 11% the concealment/mystification strategy.
Keywords: Attitude, Ideological Framing, Television News.
6499 Correlation of Viscosity in Nanofluids using Genetic Algorithm-neural Network (GA-NN)
Authors: Hajir Karimi, Fakheri Yousefi, Mahmood Reza Rahimi
Abstract:
An accurate and proficient artificial neural network (ANN) based on a genetic algorithm (GA) is developed for predicting the viscosity of nanofluids. A genetic algorithm is used to optimize the neural network parameters to minimize the error between the predicted viscosity and the experimental one. Experimental viscosities of two nanofluids, Al2O3-H2O and CuO-H2O, from 278.15 to 343.15 K and for volume fractions up to 15%, were taken from the literature. The results of this study reveal that the GA-NN model outperforms conventional neural nets in predicting the viscosity of nanofluids, with mean absolute relative errors of 1.22% and 1.77% for Al2O3-H2O and CuO-H2O, respectively. Furthermore, the results of this work have also been compared with other models. The findings demonstrate that the GA-NN model is an effective method for predicting the viscosity of nanofluids and has better accuracy and simplicity compared with the other models.
Keywords: Genetic algorithm, nanofluids, neural network, viscosity.
6498 A User - Requirements Approach in Medical Devices Maintenance System Development: A Case Study from an Industry Perspective
Authors: Manar AlJazzazi, Mohammed Rawashdeh, Tariq Alshawaheen, Aktham Malkawi
Abstract:
This paper is part of a research project in which the way biomedical engineers carry out their work is analyzed. The goal of this paper is to present a method for the specification of user requirements in the medical device maintenance process. The data gathering methods, research model phases and a descriptive analysis are presented. These techniques and verification rules can be implemented in the medical device maintenance management process.
Keywords: Quality Function Deployment (QFD), user-requirements approach.
6497 Determinants of the U.S. Current Account
Authors: Shuh Liang
Abstract:
This article provides empirical evidence on the effect of domestic and international factors on the U.S. current account deficit. Linear dynamic regression and vector autoregression models are employed to estimate the relationships during the period from 1986 to 2011. The findings of this study suggest that the current and lagged private saving rate and the foreign current accounts of East Asian economies have played a vital role in affecting the U.S. current account. Additionally, using Granger causality tests and variance decompositions, changes in productivity growth and foreign domestic demand are found to significantly influence changes in the U.S. current account. In summary, the empirical relationship between the U.S. current account deficit and its determinants is sensitive to alternative regression models and specifications.
Keywords: Current account deficit, productivity growth, foreign demand, vector autoregression.
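A minimal sketch of a bivariate Granger-causality check of the kind mentioned above, implemented as a restricted-versus-unrestricted OLS F-test; the lag order and the synthetic series are hypothetical stand-ins for the study’s variables.

```python
# Bivariate Granger-causality F-test: does x help predict y beyond y's own
# lags?  Series are synthetic; in the study y would be the change in the
# current account and x, e.g., productivity growth or foreign demand.
import numpy as np

rng = np.random.default_rng(2)
T, p = 200, 2                         # sample length and lag order
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(p, T):                 # y is driven partly by lagged x
    y[t] = 0.4*y[t-1] + 0.3*x[t-1] + rng.normal(scale=0.5)

def lagmat(series, p):
    """Columns are lag-1 .. lag-p values aligned with series[p:]."""
    return np.column_stack([series[p-i-1:len(series)-i-1] for i in range(p)])

Y = y[p:]
Z_r = np.column_stack([np.ones(T - p), lagmat(y, p)])   # restricted model
Z_u = np.column_stack([Z_r, lagmat(x, p)])              # unrestricted model

def ssr(Z):
    beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    resid = Y - Z @ beta
    return resid @ resid

ssr_r, ssr_u = ssr(Z_r), ssr(Z_u)
df1, df2 = p, T - p - Z_u.shape[1]
F = ((ssr_r - ssr_u) / df1) / (ssr_u / df2)
print(f"F({df1},{df2}) = {F:.2f}  (large F => x Granger-causes y)")
```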
6496 Mathematical Programming Models for Portfolio Optimization Problem: A Review
Authors: M. Mokhtar, A. Shuib, D. Mohamad
Abstract:
The portfolio optimization problem has received a lot of attention from both researchers and practitioners over the last six decades. This paper provides an overview of the current state of research in portfolio optimization supported by mathematical programming techniques. In addition, this paper surveys the solution algorithms for solving portfolio optimization models, classifying them according to their nature into heuristic and exact methods. To serve these purposes, 40 related articles appearing in international journals from 2003 to 2013 have been gathered and analyzed. Based on the literature review, it has been observed that stochastic programming and goal programming constitute the most frequently employed mathematical programming techniques for tackling the portfolio optimization problem. It is hoped that the paper can meet the needs of researchers and practitioners as an easy reference on portfolio optimization.
Keywords: Portfolio optimization, Mathematical programming, Multi-objective programming, Solution approaches.
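A minimal sketch of the classic mean-variance (Markowitz) formulation, one of the exact mathematical programming models covered by such surveys; the expected returns, covariance matrix and target return are hypothetical.

```python
# Mean-variance portfolio: minimize w' * Sigma * w subject to a target
# expected return, full investment and no short selling.  Inputs are
# hypothetical three-asset values.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])                 # expected returns
sigma = np.array([[0.10, 0.02, 0.04],
                  [0.02, 0.20, 0.06],
                  [0.04, 0.06, 0.15]])            # covariance matrix
target = 0.10                                     # required portfolio return

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},        # fully invested
    {"type": "ineq", "fun": lambda w: w @ mu - target},    # return >= target
]
res = minimize(lambda w: w @ sigma @ w, x0=np.ones(3) / 3,
               bounds=[(0.0, 1.0)] * 3, constraints=constraints)

print("weights:", np.round(res.x, 3))
print("risk (variance):", round(res.x @ sigma @ res.x, 4))
```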
6495 Fuzzy Multi-Criteria Decision-Making Based on Ignatian Discernment Process
Authors: Pathinathan Theresanathan, Ajay Minj
Abstract:
The Ignatian Discernment Process (IDP) is an intense decision-making tool for deciding on life issues. Decisions are influenced by various factors outside the decision maker as well as by inclinations within. This paper develops IDP in the context of the fuzzy multi-criteria decision making (FMCDM) process. The extended VIKOR method is a decision-making method that handles even conflict situations and accommodates weights for the various issues. Various aspects of IDP, namely the three ways of decision making and the tactics of inner desires, are observed, analyzed and articulated within the framework of fuzzy rules. Decision-making situations are broadly categorized into two types: issues outside the decision maker that influence the person, and the inner feelings that play a vital role in reaching a conclusion. IDP integrates both categories using the extended VIKOR method. Case studies are carried out and analyzed with the FMCDM process. Finally, IDP is verified with an illustrative case study and the results are interpreted. A confused person who could not come to a conclusion is able to make a decision on a concrete way of life through IDP. The proposed IDP model recommends an integrated and committed approach to value-based decision making.
Keywords: Analytical hierarchy process, fuzzy multi-criteria decision making, Ignatian discernment process, Ignatian discernment, multi-criteria decision making, VIKOR.
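A minimal sketch of the crisp VIKOR ranking calculation that underlies the extended VIKOR method mentioned above; the decision matrix, weights and treatment of all criteria as benefit criteria are hypothetical.

```python
# Crisp VIKOR ranking of alternatives; the decision matrix, weights and
# criteria (all treated as benefit criteria) are hypothetical.
import numpy as np

F = np.array([[7.0, 5.0, 8.0],     # alternative A1 scores on 3 criteria
              [8.0, 6.0, 5.0],     # A2
              [6.0, 8.0, 7.0]])    # A3
w = np.array([0.4, 0.35, 0.25])    # criteria weights
v = 0.5                            # weight of the "group utility" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)
norm = w * (f_best - F) / (f_best - f_worst)   # weighted normalized regret

S = norm.sum(axis=1)               # group utility measure
R = norm.max(axis=1)               # individual regret measure
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))

for i in np.argsort(Q):            # smallest Q ranks best
    print(f"A{i+1}: S={S[i]:.3f} R={R[i]:.3f} Q={Q[i]:.3f}")
```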
6494 Investigation of Advanced Oxidation Process for the Removal of Residual Carbaryl from Drinking Water Resources
Authors: Ali Reza Rahmani, Mohamad Taghi Samadi, Maryam Khodadadi
Abstract:
A laboratory set-up was designed to survey the effectiveness of the UV/O3 advanced oxidation process (AOP) for the removal of carbaryl from polluted water in a batch reactor. The study was carried out with the UV/O3 process on water samples containing 1 to 20 mg/L of carbaryl in distilled water; the concentration range found in drinking water resources was also reproduced in synthetic water, and the effects of contact time, pH and carbaryl concentration were studied. The residual pesticide concentration was determined by high performance liquid chromatography (HPLC). The results indicated that increasing retention time and pH enhances pesticide removal efficiency. The removal efficiency was affected by the initial pesticide concentration: samples with low pesticide concentration showed remarkable removal efficiency compared to samples with high pesticide concentration. The AOP method showed removal efficiencies of 80% to 100%. Although the process showed high performance for the removal of the pesticide from water samples, it has several disadvantages, including complexity, intolerability, difficulty of maintenance, and equipment and structural requirements.
Keywords: AOP, Carbaryl, Pesticides, Water treatment.
6493 Unsteady Flow Simulations for Microchannel Design and Its Fabrication for Nanoparticle Synthesis
Authors: Mrinalini Amritkar, Disha Patil, Swapna Kulkarni, Sukratu Barve, Suresh Gosavi
Abstract:
Micro-mixers play an important role in lab-on-a-chip applications and micro total analysis systems in achieving the correct level of mixing for any given process. The mixing process can be classified as active or passive according to the use of external energy. The microfluidics literature reports that most work has been done on models of steady laminar flow; however, the study of unsteady laminar flow is an active area of research at present. There are wide applications of this, out of which we consider nanoparticle synthesis in micro-mixers. In this work, we have developed a model of unsteady flow to study the mixing performance of a passive micro-mixer for the reactants used in such synthesis. The model is developed in the Finite Volume Method (FVM)-based software OpenFOAM. The model is tested by carrying out simulations at a Reynolds number of 0.5. The mixing performance of the micro-mixer is investigated using simulated concentration values of the mixed species across the width of the micro-mixer and calculating the variance across a line profile. Experimental validation is done by passing dyes through a Y-shaped micro-mixer fabricated using polydimethylsiloxane (PDMS) polymer and comparing the variances with the simulated ones. Gold nanoparticles are later synthesized in the micro-mixer and collected at two different times, leading to significantly different size distributions. These times match the time scales over which the reactant concentrations vary, as obtained from the simulations. Our simulations could thus be used to create design aids for passive micro-mixers used in nanoparticle synthesis.
Keywords: Lab-on-chip, micro-mixer, OpenFOAM, PDMS.
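A minimal sketch of a variance-based mixing measure of the kind described above, computed on a hypothetical concentration profile sampled across the channel width; the intensity-of-segregation style index shown is an assumption, not necessarily the authors’ exact definition.

```python
# Mixing quality from a concentration line profile across the channel width:
# variance of the normalized concentration and a simple mixing index.
# The profile values are hypothetical simulation samples.
import numpy as np

c = np.array([0.95, 0.90, 0.78, 0.62, 0.50, 0.41, 0.30, 0.18, 0.10, 0.05])

c_mean = c.mean()
variance = np.mean((c - c_mean) ** 2)
var_max = c_mean * (1.0 - c_mean)        # variance of a fully segregated profile
mixing_index = 1.0 - np.sqrt(variance / var_max)   # 0 = unmixed, 1 = fully mixed

print(f"variance = {variance:.4f}, mixing index = {mixing_index:.3f}")
```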
6492 Waste Oils pre-Esterification for Biodiesel Synthesis: Effect of Feed Moisture Contents
Authors: Kalala Jalama
Abstract:
A process flowsheet was developed in ChemCad 6.4 to study the effect of feed moisture content on the pre-esterification of waste oils. Waste oils were modelled as a mixture of triolein (90%), oleic acid (5%) and water (5%). The process mainly consisted of feed drying, pre-esterification reaction and methanol recovery. The results showed that the process energy requirements are minimized when higher degrees of feed drying and higher pre-esterification reaction temperatures are used.
Keywords: Waste oils, moisture content, pre-esterification.
6491 Using Ferry Access Points to Improve the Performance of Message Ferrying in Delay-Tolerant Networks
Authors: Farzana Yasmeen, Md. Nurul Huda, Md. Enamul Haque, Michihiro Aoki, Shigeki Yamada
Abstract:
Delay-Tolerant Networks (DTNs) are sparse wireless networks where disconnections are common due to host mobility and low node density. The Message Ferrying (MF) scheme is a mobility-assisted paradigm to improve connectivity in DTN-like networks. A ferry, or message ferry, is a special node in the network that has a pre-determined route in the deployed area and relays messages between mobile hosts (MHs) that are intermittently connected. Increased contact opportunities among mobile hosts and the ferry improve the performance of the network, both in terms of message delivery ratio and average end-to-end delay. However, due to the inherent mobility of the mobile hosts and the pre-determined periodicity of the message ferry, mobile hosts may often “miss” contact opportunities with a ferry. In this paper, we propose the combination of stationary ferry access points (FAPs) with MF routing to increase contact opportunities between mobile hosts and the MF and consequently improve the performance of the DTN. We also propose several placement models for deploying FAPs on MF routes. We evaluate the performance of the FAP placement models through comprehensive simulation. Our findings show that FAPs do improve the performance of MF-assisted DTNs and that symmetric placement of FAPs outperforms other placement strategies.
Keywords: Service infrastructure, delay-tolerant network, message ferry routing, placement models.
6490 A Review of Existing Turnover Intention Theories
Authors: Pauline E. Ngo-Henha
Abstract:
Existing turnover intention theories are reviewed in this paper. The review was conducted using the search keyword “turnover intention theories” in Google Scholar during July 2017. These theories include the Theory of Organizational Equilibrium (TOE), Social Exchange Theory, Job Embeddedness Theory, Herzberg’s Two-Factor Theory, the Resource-Based View, Equity Theory, Human Capital Theory, and the Expectancy Theory. One limitation of this review paper is that data were collected only from Google Scholar, where many papers were not freely accessible. However, this paper attempts to contribute to the research by clarifying the distinction between theories and models in the context of turnover intention.
Keywords: Job embeddedness theory, theory of organizational equilibrium (TOE), Herzberg’s two-factor theory, turnover intention theories, theories and models.
6489 Prediction of Bath Temperature Using Neural Networks
Authors: H. Meradi, S. Bouhouche, M. Lahreche
Abstract:
In this work, we consider an application of neural networks in the LD converter. Application of this approach assumes a reliable prediction of steel temperature and reduces the reblow ratio in the steelworks. A conventional model has been applied to the charge calculation, but the results obtained by this technique are not always good owing to the complexity of the process. Difficulties are mainly generated by noisy measurements and process nonlinearities. Artificial neural networks (ANNs) have become a powerful tool for such complex applications. A backpropagation algorithm is used to train the neural nets. The ANN is used to predict the steel bath temperature at the end condition of the oxygen converter process. The model has 11 input process variables and one output. The model was tested in the steelworks, and the results obtained by the neural approach are better than those of the conventional model.
Keywords: LD converter, bath temperature, neural networks.
6488 Graphical Programming of Programmable Logic Controllers -Case Study for a Punching Machine-
Authors: Vasile Marinescu, Ionut Clementin Constantin, Alexandru Epureanu, Virgil Teodor
Abstract:
The Programmable Logic Controller (PLC) plays a vital role in automation and process control. Grafcet is used for representing the control logic, and traditional programming languages are used for describing the pure algorithms. Grafcet is used for dividing the process to be automated into elementary sequences that can be easily implemented. Each sequence represents a step that has associated actions programmed using textual or graphical languages as appropriate. The programming task is simplified by using a set of subroutines that are reused in several steps. The paper presents an example of implementation for a punching machine for sheets and plates. The use of graphical languages is a necessary solution for programming a complex sequential process. The state of the Grafcet can be used for debugging and malfunction determination. The use of the method, combined with knowledge acquisition for the process application, reduces the downtime of the machine and improves productivity.
Keywords: Grafcet, Petri net, PLC, punching.
6487 Modelling of Hydric Behaviour of Textiles
Authors: A. Marolleau, F. Salaun, D. Dupont, H. Gidik, S. Ducept.
Abstract:
The goal of this study is to analyze the hydric behaviour of textiles, which can significantly impact the comfort of the wearer. Indeed, fabrics can be adapted for different climates if their hydric and thermal behaviours are known. In this study, fabrics are submitted to hydric variations only. Sorption and desorption isotherms obtained from the dynamic vapour sorption apparatus (DVS) are fitted with the parallel exponential kinetics (PEK), Hailwood-Horrobin (HH) and Brunauer-Emmett-Teller (BET) models. One of the major findings is the relationship existing between the PEK and HH models. During slow and fast processes, the sorption of water molecules on the polymer can take place in monolayer and multilayer form. According to the BET model, moisture regain, a physical property of textiles, shows a linear correlation with the total amount of water taken up in the monolayer. This study provides information on the potential end uses of these fabrics according to the selected activity level.
Keywords: Comfort, hydric properties, modelling, underwear.
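A minimal sketch of fitting the BET sorption equation to moisture-regain data; the water activities and regain values are hypothetical, not the study’s DVS measurements.

```python
# Fitting the BET sorption equation M(aw) = Mm*C*aw / ((1-aw)*(1+(C-1)*aw))
# to hypothetical moisture-regain data measured at several water activities.
import numpy as np
from scipy.optimize import curve_fit

def bet(aw, Mm, C):
    return Mm * C * aw / ((1.0 - aw) * (1.0 + (C - 1.0) * aw))

aw = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])      # water activity
M  = np.array([1.1, 1.8, 2.4, 3.0, 3.7, 4.6, 5.9])      # % moisture regain

(Mm, C), _ = curve_fit(bet, aw, M, p0=[2.0, 10.0])
print(f"monolayer capacity Mm ~ {Mm:.2f} %, BET constant C ~ {C:.1f}")
```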
6486 An Evaluation of Digital Elevation Models to Short-Term Monitoring of a High Energy Barrier Island, Northeast Brazil
Authors: Venerando E. Amaro, Francisco Gabriel F. de Lima, Marcelo S.T. Santos
Abstract:
The morphological short-term evolution of Ponta do Tubarão Island (PTI) was investigated through highly accurate surveys based on post-processed kinematic (PPK) relative positioning with Global Navigation Satellite Systems (GNSS). PTI is part of a barrier island system in a high-energy northeast Brazilian coastal environment and is also an area of high environmental sensitivity. Surveys were carried out quarterly over a two-year period from May 2010 to May 2012. This paper statistically assesses the performance of digital elevation models (DEMs) derived from different interpolation methods in representing morphologic features and quantifying volumetric changes, and TIN models showed the best results for these purposes. The DEMs allowed quantifying surfaces and volumes in detail, as well as identifying the segments of PTI most vulnerable to erosion and/or accumulation of sediments and relating the alterations to climate conditions. The coastal setting and geometry of PTI protect a significant mangrove ecosystem, and some oil and gas facilities installed in the vicinity, from the damaging effects of strong ocean waves and currents. Thus, the maintenance of PTI is extremely necessary, but the prediction of its longevity is uncertain because the results indicate an irregular sedimentary balance and a substantial decline in sediment supply to this coastal area.
Keywords: DEM, GNSS, short-term monitoring, Brazil.
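A minimal sketch of the DEM workflow described above: TIN-style (Delaunay, linear) interpolation of two survey epochs onto a common grid followed by a volumetric-change estimate; the survey points are synthetic, not the PPK-GNSS data.

```python
# TIN-style (Delaunay, linear) interpolation of two GNSS survey epochs onto a
# common grid and a cut/fill volume estimate.  Survey points are hypothetical.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(300, 2))                 # survey x, y (m)
z1 = 2.0 + 0.01*pts[:, 0] + 0.2*np.sin(pts[:, 1]/10)     # epoch 1 elevations
z2 = z1 + rng.normal(0.0, 0.05, size=300) + 0.10         # epoch 2 elevations

cell = 1.0                                               # grid cell size (m)
gx, gy = np.meshgrid(np.arange(5, 95, cell), np.arange(5, 95, cell))
dem1 = griddata(pts, z1, (gx, gy), method="linear")      # TIN-equivalent DEM
dem2 = griddata(pts, z2, (gx, gy), method="linear")

dz = dem2 - dem1
volume_change = np.nansum(dz) * cell**2                  # m^3 (net accretion > 0)
print(f"net volumetric change ~ {volume_change:.1f} m^3")
```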
6485 Experimental Correlation for Erythrocyte Aggregation Rate in Population Balance Modeling
Authors: Erfan Niazi, Marianne Fenech
Abstract:
Red blood cells (RBCs), or erythrocytes, tend to form chain-like aggregates called rouleaux under low shear rates. This is a reversible process, and rouleaux disaggregate at high shear rates. Therefore, RBC aggregation occurs in the microcirculation, where low shear rates are present, but does not occur under normal physiological conditions in large arteries. Numerical modeling of RBC interactions is fundamental in analytical models of blood flow in the microcirculation. Population balance modeling (PBM) is particularly useful for studying problems where particles agglomerate and break up in two-phase flow systems in order to find the flow characteristics. In this method, the elementary particles lose their individual identity due to continuous destruction and recreation by break-up and agglomeration. The aim of this study is to find the RBC aggregation rate in a dynamic situation. A simplified PBM was used previously to find the aggregation rate from a static observation of RBC aggregation in a drop of blood under the microscope. To find the aggregation rate in a dynamic situation, we propose an experimental set-up testing RBC sedimentation. In this test, RBCs interact and aggregate to form rouleaux. In this configuration, disaggregation can be neglected due to the low shear stress. A high-speed camera is used to acquire video-microscopic pictures of the process. The sizes of the aggregates and the velocity of sedimentation are extracted using image processing techniques. Based on data collected from five healthy human blood samples, the aggregation rate was estimated as 2.7×10³ (±0.3×10³) 1/s.
Keywords: Red blood cell, Rouleaux, microfluidics, image processing, population balance modeling.
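A minimal sketch of a discrete Smoluchowski-type population balance with a constant aggregation kernel, a simplified stand-in for the PBM of rouleaux formation; the kernel value, initial cell concentration and time step are hypothetical.

```python
# Discrete Smoluchowski population balance with a constant aggregation kernel.
# n[k] is the concentration of rouleaux containing k cells; break-up is
# neglected, as in the low-shear sedimentation setting described above.
import numpy as np

K = 1.0e-3        # aggregation kernel (hypothetical, concentration^-1 s^-1)
N_MAX = 20        # largest aggregate size tracked (cells per rouleau)
dt, steps = 0.01, 500

n = np.zeros(N_MAX + 1)
n[1] = 100.0      # start with singlet RBCs only (arbitrary concentration units)

for _ in range(steps):
    dn = np.zeros_like(n)
    for k in range(1, N_MAX + 1):
        birth = 0.5 * sum(K * n[i] * n[k - i] for i in range(1, k))
        death = K * n[k] * n[1:].sum()
        dn[k] = birth - death
    n += dt * dn  # explicit Euler step

mean_size = sum(k * n[k] for k in range(1, N_MAX + 1)) / n[1:].sum()
print(f"mean aggregate size after {steps*dt:.1f} s: {mean_size:.2f} cells")
```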