Search results for: new process model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28452

27012 Reduction of Rotor-Bearing-Support Finite Element Model through Substructuring

Authors: Abdur Rosyid, Mohamed El-Madany, Mohanad Alata

Abstract:

Due to its simplicity and low cost, a rotordynamic system is often modeled using lumped parameters. Recently, finite elements have been used to model rotordynamic systems, as they offer higher accuracy. However, a finite element model involves many degrees of freedom, which in some applications, such as control design, leads to higher cost. For this reason, various model reduction methods have been proposed. This work demonstrates the quality of model reduction of a rotor-bearing-support system through substructuring. The quality of the reduction is evaluated by comparing the first few natural frequencies, modal damping ratios, critical speeds, and responses of the full and reduced systems. The simulation shows that substructuring is adequate to reduce a finite element rotor model in the frequency range of interest as long as the number and locations of the master nodes are chosen appropriately. However, the reduction is less accurate for an unstable or nearly unstable system.
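
As a concrete illustration of the reduction step named in the keywords, the following sketch performs a Guyan (static) condensation of hypothetical stiffness and mass matrices onto a chosen set of master degrees of freedom; the rotor-bearing-support matrices themselves would come from the finite element model and are not reproduced here.

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Statically condense stiffness K and mass M onto the master DOFs.

    Minimal sketch of Guyan reduction; K, M and the master-DOF indices
    are assumed to be supplied by the finite element model.
    """
    n = K.shape[0]
    master = np.asarray(master)
    slave = np.setdiff1d(np.arange(n), master)

    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]

    # Static condensation: slave DOFs follow the masters, u_s = -Kss^-1 Ksm u_m
    T = np.vstack([np.eye(len(master)), -np.linalg.solve(Kss, Ksm)])

    # Reorder the full matrices as [masters; slaves] before projecting
    order = np.concatenate([master, slave])
    Kr = T.T @ K[np.ix_(order, order)] @ T
    Mr = T.T @ M[np.ix_(order, order)] @ T
    return Kr, Mr
```

Natural frequencies of the reduced pair (Kr, Mr) can then be compared with those of the full model to judge the quality of the reduction.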

Keywords: rotordynamics, finite element model, Timoshenko beam, 3D solid elements, Guyan reduction method

Procedia PDF Downloads 270
27011 A Unified Model for Orotidine Monophosphate Synthesis: Target for Inhibition of Growth of Mycobacterium tuberculosis

Authors: N. Naga Subrahmanyeswara Rao, Parag Arvind Deshpande

Abstract:

Understanding the nucleotide synthesis reactions of an organism is beneficial for understanding its growth, as in Mycobacterium tuberculosis, and for designing anti-TB drugs. One of the reactions of the de novo pathway, which takes place in all organisms, was considered. The reaction takes place between phosphoribosyl pyrophosphate and orotate, catalyzed by orotate phosphoribosyltransferase and a divalent metal ion, and gives orotidine monophosphate, a nucleotide. All the reaction steps of three experimentally proposed mechanisms for this reaction were considered to develop a kinetic rate expression. The model was validated using data for four organisms and could successfully describe the kinetics for the reported data. The developed model can therefore serve as a reliable, organism-independent model to describe the kinetics in new organisms without the need for mechanistic determination.

Keywords: mechanism, nucleotide, organism, tuberculosis

Procedia PDF Downloads 333
27010 Energy Efficiency Analysis of Crossover Technologies in Industrial Applications

Authors: W. Schellong

Abstract:

Industry accounts for one-third of global final energy demand. Crossover technologies (e.g., motors, pumps, process heat, and air conditioning) play an important role in improving energy efficiency. These technologies are used in many applications independent of the production branch. Electrical power, in particular, is used by drives, pumps, compressors, and lighting. The paper demonstrates the algorithm of the energy analysis with selected case studies for typical industrial processes. The energy analysis represents an essential part of energy management systems (EMS). Generally, process control systems (PCS) can support EMS: they provide information about the production process, and they organize maintenance actions. Combining these tools into an integrated process allows the development of an energy-critical-equipment strategy. Thus, asset and energy management can use the same data to improve energy efficiency.

Keywords: crossover technologies, data management, energy analysis, energy efficiency, process control

Procedia PDF Downloads 209
27009 Controlling the Expense of Political Contests Using a Modified N-Players Tullock’s Model

Authors: C. Cohen, O. Levi

Abstract:

This work introduces a generalization of the classical Tullock model of one-stage contests under complete information with an arbitrary number of contestants. In the classical Tullock model, the contest winner is not necessarily the highest bidder. Instead, the winner is determined by a draw in which the winning probabilities are proportional to the contestants' relative efforts. Tullock modeling therefore fits political contests well, in which the winner is not necessarily the contestant exerting the highest effort. This work presents a modified model that uses a simple non-discriminating rule, namely a parameter through which the contest designer can influence the total costs planned for an election and thereby control the contestants' efforts. The winner pays a fee, and the losers are reimbursed the same amount. Our proposed model includes a mechanism that controls the efforts exerted and balances competition, creating a tighter, less predictable, and more interesting contest. Additionally, the proposed model satisfies a fairness criterion in the sense that it does not alter the contestants' probabilities of winning compared to the classical Tullock model. We provide an analytic solution for the contestants' optimal effort and expected reward.
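
For reference, the classical Tullock success function and the textbook symmetric equilibrium effort can be written down in a few lines; the sketch below is illustrative only and does not reproduce the paper's fee-and-reimbursement modification.

```python
def win_probabilities(efforts):
    """Classical Tullock lottery: p_i = x_i / sum_j x_j."""
    total = sum(efforts)
    if total == 0:
        return [1.0 / len(efforts)] * len(efforts)
    return [x / total for x in efforts]

def symmetric_equilibrium_effort(n, prize):
    """Textbook symmetric Nash effort for the classical n-player Tullock
    contest with prize V: x* = V * (n - 1) / n**2."""
    return prize * (n - 1) / n ** 2

# e.g. three contestants with efforts 2, 1, 1 win with probabilities 0.5, 0.25, 0.25
print(win_probabilities([2, 1, 1]))
```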

Keywords: contests, Tullock's model, political elections, control expenses

Procedia PDF Downloads 143
27008 Adsorption and Selective Determination of Ametryne in Food Samples Using Magnetically Separable Molecularly Imprinted Polymers

Authors: Sajjad Hussain, Sabir Khan, Maria Del Pilar Taboada Sotomayor

Abstract:

This work demonstrates the synthesis of magnetic molecularly imprinted polymers (MMIPs) for the determination of a selected pesticide (ametryne) using high performance liquid chromatography (HPLC). Computational simulation assisted the choice of the most suitable monomer for the synthesis of the polymers. The MMIPs were polymerized at the surface of Fe3O4@SiO2 magnetic nanoparticles (MNPs) using 2-vinylpyridine as the functional monomer, ethylene glycol dimethacrylate (EGDMA) as the cross-linking agent, and 2,2'-azobisisobutyronitrile (AIBN) as the radical initiator. A magnetic non-imprinted polymer (MNIP) was also prepared under the same conditions without the analyte. The MMIPs were characterized by scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) analysis, and Fourier transform infrared spectroscopy (FTIR). Pseudo-first-order and pseudo-second-order models were applied to study the adsorption kinetics, and it was found that the adsorption process followed the pseudo-first-order kinetic model. The adsorption equilibrium data were fitted to the Freundlich and Langmuir isotherms, and the sorption equilibrium process was well described by the Langmuir isotherm model. The selectivity coefficients (α) of the MMIPs for ametryne with respect to atrazine, ciprofloxacin, and folic acid were 4.28, 12.32, and 14.53, respectively. Spiked recoveries between 91.33% and 106.80% were obtained. The results showed the high affinity and selectivity of the MMIPs for the pesticide ametryne in food samples.
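
A minimal sketch of the isotherm and kinetic fits mentioned above, assuming hypothetical equilibrium data; the actual experimental values are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1 + KL * Ce)

def pseudo_first_order(t, qe, k1):
    """Pseudo-first-order kinetics: qt = qe * (1 - exp(-k1 * t))."""
    return qe * (1 - np.exp(-k1 * t))

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) -- for illustration only
Ce = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
qe = np.array([8.0, 15.0, 22.0, 30.0, 35.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.1])
print(f"fitted qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")
```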

Keywords: molecularly imprinted polymer, pesticides, magnetic nanoparticles, adsorption

Procedia PDF Downloads 484
27007 Fama French Four Factor Model: A Study of Nifty Fifty Companies

Authors: Deeksha Arora

Abstract:

The study aims to explore the applicability of widely used asset pricing models, namely the Capital Asset Pricing Model (CAPM) and the Fama-French Four Factor Model, in the Indian equity market. The study is based on the companies that form part of the Nifty Fifty Index over a period of five years, 2011 to 2016. The asset pricing model is examined by forming portfolios on the basis of three variables: market capitalization (size effect), book-to-market equity ratio (value effect), and profitability. The study provides a basis to test the presence of the Fama-French Four Factor Model in the Indian stock market and may serve as a basis for future research on generalized asset pricing models comprising multiple risk factors.
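
The factor model itself reduces to an ordinary least squares regression of portfolio excess returns on the factor series; the sketch below assumes hypothetical factor inputs (market, size, value, and a profitability factor) and is not taken from the study.

```python
import numpy as np

def four_factor_loadings(excess_returns, mkt_rf, smb, hml, rmw):
    """OLS estimate of alpha and factor loadings for
        R_p - R_f = a + b*MKT + s*SMB + h*HML + r*RMW + e.
    The factor series passed in are assumed inputs; the abstract's factors
    are market, size, value and profitability."""
    X = np.column_stack([np.ones_like(mkt_rf), mkt_rf, smb, hml, rmw])
    coeffs, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    return coeffs  # [alpha, b, s, h, r]
```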

Keywords: book to market equity, Fama French four factor model, market capitalization, profitability, size effect, value effect

Procedia PDF Downloads 261
27006 The Effectiveness of a Hybrid Diffie-Hellman-RSA-Advanced Encryption Standard Model

Authors: Abdellahi Cheikh

Abstract:

With the emergence of quantum computers with very powerful capabilities, the security of the exchange of shared keys between two interlocutors poses a major problem given the rapid growth of computing power and speed. The Diffie-Hellman (DH) algorithm is therefore more vulnerable than ever: no mechanism guarantees the security of the key exchange itself, so an intermediary who manages to intercept it can do so easily. In this regard, several studies have been conducted to improve the security of key exchange between two interlocutors, which has led to interesting results. The modification made to our Diffie-Hellman-RSA-AES (DRA) model, which encrypts the information exchanged between two users using the three encryption algorithms DH, RSA, and AES, consists of using steganographic images to hide the contents of the p, g, and ClesAES values that are otherwise sent in an unencrypted state in the DRA model to calculate each user's public key. This work includes a comparative study between the DRA model and existing solutions, as well as the modification made to this model, with an emphasis on reliability in terms of security. The study presents a simulation to demonstrate the effectiveness of the modification made to the DRA model. The results obtained show that our model has a security advantage over the existing solutions, so these changes reinforce the security of the DRA model.
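
As a hedged illustration of the Diffie-Hellman stage of such a scheme (not the authors' implementation, and with deliberately tiny, insecure parameters), the shared secret can be computed and hashed into a symmetric key for the AES stage roughly as follows.

```python
import hashlib
import secrets

# Toy Diffie-Hellman exchange. The group parameters p and g are assumed to be
# agreed between the parties (the paper hides them in steganographic images);
# the small prime below is for demonstration only and offers no real security.
p, g = 0xFFFFFFFB, 5  # hypothetical parameters

a = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2   # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)  # exchanged public values

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

# Derive a 256-bit symmetric key for the AES stage (e.g. AES-256)
aes_key = hashlib.sha256(shared_alice.to_bytes(16, "big")).digest()
```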

Keywords: Diffie-Hellman, DRA, RSA, advanced encryption standard

Procedia PDF Downloads 93
27005 Effects of Residence Time on Selective Absorption of Hydrogen Sulphide

Authors: Dara Satyadileep, Abdallah S. Berrouk

Abstract:

Selective absorption of hydrogen sulphide (H2S) using methyldiethanolamine (MDEA) has become a point of interest as a means of minimizing the capital and operating costs of gas sweetening plants. This paper discusses the importance of optimum design of column internals to best achieve H2S selectivity using MDEA. To this end, a kinetics-based process simulation model has been developed for a commercial gas sweetening unit. Trends of sweet gas H2S and CO2 contents as a function of fractional active area (and hence residence time) are explained through analysis of the interdependent heat and mass transfer phenomena. Guidelines for column internals design to achieve the desired degree of H2S selectivity are provided. The effectiveness of various operating conditions in achieving H2S selectivity for an industrial absorber with fixed internals is also investigated.

Keywords: gas sweetening, H2S selectivity, methyldiethanolamine, process simulation, residence time

Procedia PDF Downloads 340
27004 Parameter Estimation for the Oral Minimal Model and Parameter Distinctions Between Obese and Non-obese Type 2 Diabetes

Authors: Manoja Rajalakshmi Aravindakshan, Devleena Ghosh, Chittaranjan Mandal, K. V. Venkatesh, Jit Sarkar, Partha Chakrabarti, Sujay K. Maity

Abstract:

The Oral Glucose Tolerance Test (OGTT) is the primary test used to diagnose type 2 diabetes mellitus (T2DM) in a clinical setting. Analysis of OGTT data using the Oral Minimal Model (OMM), along with the rate of appearance of ingested glucose (Ra), is performed to study differences in model parameters between control and T2DM groups. The differentiation of the model parameters gives insight into the behaviour and physiology of T2DM. The model is also studied to find parameter differences between obese and non-obese T2DM subjects, and the sensitive parameters are correlated with known physiological findings. Sensitivity analysis is performed to understand how parameter values change with the model output, and appropriate statistical tests are done to support the findings. This appears to be the first preliminary application of the OMM with obesity as a distinguishing factor in understanding T2DM from the estimated parameters of an insulin-glucose model and in relating the statistical differences in parameters to diabetes pathophysiology.
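
For readers unfamiliar with the OMM, a minimal sketch of its standard glucose-insulin equations is given below; the parameter values, insulin profile, and Ra input are hypothetical placeholders, whereas the paper estimates them from OGTT data.

```python
from scipy.integrate import solve_ivp

def omm_rhs(t, y, p, insulin, ra):
    """Right-hand side of the oral minimal model (standard formulation):
        dG/dt = -(S_G + X) * G + S_G * G_b + Ra(t) / V
        dX/dt = -p2 * (X - S_I * (I(t) - I_b))
    """
    G, X = y
    SG, SI, p2, V, Gb, Ib = p
    dG = -(SG + X) * G + SG * Gb + ra(t) / V
    dX = -p2 * (X - SI * (insulin(t) - Ib))
    return [dG, dX]

# Example call with made-up parameters and flat insulin / Ra inputs
p = (0.02, 7e-4, 0.03, 1.6, 90.0, 8.0)   # SG, SI, p2, V, Gb, Ib
sol = solve_ivp(omm_rhs, (0, 180), [90.0, 0.0],
                args=(p, lambda t: 8.0, lambda t: 0.0), dense_output=True)
```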

Keywords: oral minimal model, OGTT, obese and non-obese T2DM, mathematical modeling, parameter estimation

Procedia PDF Downloads 89
27003 A Review of Run-to-Run (R2R) Control in Manufacturing Processes

Authors: Khalil Aghapouramin, Mostafa Ranjbar

Abstract:

Run-to-Run (R2R) control was developed to monitor and control semiconductor manufacturing processes based upon fundamental engineering frameworks. This technology allows corrections to be made in the optimum direction and has shown significant potential in a variety of processes. The term run-to-run refers to the case where control actions are taken between runs, for example between batches of silicon wafers produced in a manufacturing process. The present work provides a brief review of run-to-run control and the manufacturing processes in which it is mainly effective.
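
As an illustration of the kind of controller such reviews cover, a single-input EWMA run-to-run update (a common R2R scheme, not necessarily one of those surveyed here) can be sketched as follows.

```python
def ewma_r2r_update(recipe, measurement, target, gain, weight, intercept_est):
    """One EWMA run-to-run update. The process is assumed to behave roughly as
    y = gain * u + intercept; after each run the intercept (disturbance)
    estimate is refreshed and the next recipe u is chosen to hit the target.
    All arguments are hypothetical process quantities."""
    # Update the disturbance/intercept estimate from the latest run
    intercept_est = weight * (measurement - gain * recipe) + (1 - weight) * intercept_est
    # Choose the next recipe so the predicted output equals the target
    next_recipe = (target - intercept_est) / gain
    return next_recipe, intercept_est
```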

Keywords: Run-to-Run (R2R) control, manufacturing, process in engineering, manufacturing controls

Procedia PDF Downloads 491
27002 Numerical Modelling of Wind Dispersal of Seeds of the Bromeliad Tillandsia recurvata (L.) L. Attached to Electric Power Lines

Authors: Bruna P. De Souza, Ricardo C. De Almeida

Abstract:

In some cities in the State of Paraná, Brazil, and in other countries, atmospheric bromeliads (Tillandsia spp., Bromeliaceae) are considered weeds on trees, electric power lines, satellite dishes, and other artificial supports. In this study, a numerical model was developed to simulate the seed dispersal of the Tillandsia recurvata species by wind, with the objective of evaluating seed displacement in the city of Ponta Grossa, PR, Brazil, since the region is considered already infested. The model simulates the dispersal of each individual seed, integrating parameters from the atmospheric boundary layer (ABL) and the local wind, simulated by the Weather Research and Forecasting (WRF) mesoscale atmospheric model for the 2012 to 2015 period. The dispersal model also incorporates the approximate number of bromeliads and source height data collected from the most infested electric power lines. The seed terminal velocity, an important input parameter that was not available in the literature, was measured in an experiment with fifty-one seeds of Tillandsia recurvata. Wind is the main dispersal agent acting on plumed seeds, whereas atmospheric turbulence is a determinant factor in transporting seeds to distances beyond 200 meters as well as in introducing random variability into the seed dispersal process. Such variability was added to the model through the application of an inverse fast Fourier transform to the wind velocity component energy spectra, based on boundary-layer meteorology theory and estimated from micrometeorological parameters produced by the WRF model. Seasonal and annual wind means were obtained from the surface wind data simulated by WRF for Ponta Grossa. The mean wind direction is assumed to be the most probable direction of the bromeliad seed trajectory. Moreover, the atmospheric turbulence effect and dispersal distances were analyzed in order to identify likely regions of infestation around the Ponta Grossa urban area. It is important to mention that this model could be applied to any species and location as long as the seed's biological data and meteorological data for the region of interest are available.
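
A first-order ballistic estimate of dispersal distance, often used as a baseline before adding turbulence, is sketched below with illustrative values; it is not the full model described above.

```python
def ballistic_dispersal_distance(wind_speed, release_height, terminal_velocity):
    """First-order (ballistic) estimate of wind dispersal distance:
    d = U * H / V_t. This ignores the turbulence-driven variability that the
    full model adds via an inverse FFT of the velocity spectra; the values
    used below are illustrative only."""
    return wind_speed * release_height / terminal_velocity

# e.g. a seed released from a 10 m power line in a 3 m/s wind, with a
# 0.5 m/s terminal velocity, travels roughly 60 m
print(ballistic_dispersal_distance(3.0, 10.0, 0.5))
```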

Keywords: atmospheric turbulence, bromeliad, numerical model, seed dispersal, terminal velocity, wind

Procedia PDF Downloads 140
27001 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs) and handles equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify, which restricts likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered, but it is rather computationally intensive and may not even estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments and is shown to yield estimates as efficient as GQL-I while being far more computationally stable.

Keywords: longitudinal, Com-Poisson, ill-conditioned, INAR(1), GLMs, GQL

Procedia PDF Downloads 353
27000 Model Order Reduction Using Hybrid Genetic Algorithm and Simulated Annealing

Authors: Khaled Salah

Abstract:

Model order reduction has been one of the most challenging topics in recent years. In this paper, a hybrid of a genetic algorithm (GA) and a simulated annealing algorithm (SA) is used to approximate high-order transfer functions (TFs) by lower-order TFs. The hybrid algorithm is applied to model order reduction with the goals of improving accuracy and preserving the properties of the original model, two important issues for improving the performance of simulation and computation and for maintaining the behavior of the original complex models being reduced. Compared to the conventional mathematical methods that have been used to obtain reduced-order models of high-order complex models, the proposed method provides better results in terms of run-time. Thus, the proposed technique could be used in electronic design automation (EDA) tools.

Keywords: genetic algorithm, simulated annealing, model reduction, transfer function

Procedia PDF Downloads 142
26999 Modeling Driving Distraction Considering Psychological-Physical Constraints

Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang

Abstract:

Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, the model accuracy is not very satisfying due to a lack of modeling of the cognitive mechanism underlying the distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP). The queuing structure of the model is used to perform task invocation and switching for distracted operation and control of the vehicle under driver distraction. Based on the assumption of the QN-MHP model about the cognitive sub-network, server F is a structural bottleneck: later information must wait for the previous information to leave server F before it can be processed there, so the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task need to go through the visual perception sub-network, and their stimuli are asynchronous, which is described by the stimulus onset asynchrony (SOA) and must be considered when calculating the waiting time for task switching. In the case of auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be treated as synchronized, without considering the time difference in receiving the stimuli. Following the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model with risk entropy as the independent variable is used to determine whether the driver performs a distraction task and to explain the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then executes the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) data to classify the patterns of distracted behavior on different road facilities and obtains three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and its performance is better than the traditional IDM with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
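
The decision layer described above, a logistic regression with risk entropy as the single predictor of task switching, can be sketched as follows; the risk-entropy values and labels are synthetic placeholders, not SH-NDS data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic risk-entropy values and engagement labels -- placeholders only.
# 1 = driver performs the distraction task, 0 = driver keeps attending to driving.
risk_entropy = np.array([[0.1], [0.3], [0.5], [0.7], [0.9], [1.1], [1.3], [1.5]])
engaged = np.array([1, 1, 1, 0, 1, 0, 0, 0])

clf = LogisticRegression().fit(risk_entropy, engaged)

# Probability of engaging in a distraction task at a given perceived-risk level
p_distracted = clf.predict_proba([[0.8]])[0, 1]
```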

Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints

Procedia PDF Downloads 90
26998 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks

Authors: Tugba Bayoglu

Abstract:

Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, there are few missile aerodynamic parameter identification studies in the literature, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) the number and duration of missile flight tests are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with respect to Mach number is larger than that of fixed-wing aircraft. In addition to these challenges, identification of aerodynamic parameters at high wind angles by classical estimation techniques brings another difficulty to the estimation process. The reason is that most estimation techniques require polynomials or splines to model the behavior of the aerodynamics; for missiles with a large variation of aerodynamic parameters with respect to the flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using artificial neural networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.

Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification

Procedia PDF Downloads 278
26997 Towards an Enhanced Compartmental Model for Profiling Malware Dynamics

Authors: Jessemyn Modiini, Timothy Lynar, Elena Sitnikova

Abstract:

We present a novel enhanced compartmental model for malware spread analysis in cyber security. This paper applies cyber security data features to epidemiological compartmental models to model the infectious potential of malware. Compartmental models are among the most efficient tools for calculating the infectious potential of a disease. In this paper, we discuss and profile epidemiologically relevant data features from a Domain Name System (DNS) dataset and apply these features, together with network traffic features, to epidemiological compartmental models. This paper demonstrates how epidemiological principles can be applied to the novel analysis of key cybersecurity behaviours and trends and provides insight into threat modelling beyond that of kill-chain analysis. In applying deterministic compartmental models to a cyber security use case, the authors analyse their deficiencies and provide an enhanced stochastic model for cyber epidemiology. This enhanced compartmental model (the SUEICRN model) is contrasted with the traditional SEIR model to demonstrate its efficacy.
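
For reference, the baseline SEIR model that the proposed SUEICRN model is contrasted with can be written as a small ODE system; the parameter values and population below are hypothetical, and the additional SUEICRN compartments are not reproduced.

```python
from scipy.integrate import solve_ivp

def seir_rhs(t, y, beta, sigma, gamma):
    """Classical SEIR compartmental model.
    beta = infection rate, sigma = incubation (E -> I) rate, gamma = removal rate."""
    S, E, I, R = y
    N = S + E + I + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Hypothetical host population of 10,000 machines with one initial infection
sol = solve_ivp(seir_rhs, (0, 120), [9999, 0, 1, 0],
                args=(0.4, 0.2, 0.1), dense_output=True)
```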

Keywords: cybersecurity, epidemiology, cyber epidemiology, malware

Procedia PDF Downloads 106
26996 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies where Form-Based Codes are adopted in general, and discusses the different implementation methods in particular, to develop a method for formulating a new planning model. The organizing principle of Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the Context Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted, and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code / smart code, required form-based standards, or design guidelines. The case studies describe the positive and negative results of form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating the Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, this research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, and formulating street-based Form-Based Codes for the selected special districts in the study area in particular. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized, and stakeholder-owned, while systematic can be thought of more as linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies, and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code that is integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in “sensitive” areas of the community. With this approach and method, the new Context Specific Planning Model for achieving sustainability is explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 334
26995 A Further Insight to Foaming in Anaerobic Digester

Authors: Ifeyinwa Rita Kanu, Thomas Aspray, Adebayo J. Adeloye

Abstract:

As a result of the ambiguity and complexity surrounding anaerobic digester foaming, efforts have been made by various researchers to understand the process of anaerobic digester foaming so as to proffer a solution that can be universally applied rather than being site specific. All attempts, ranging from experimental analysis to comparative reviews of other processes, have been unable to explain explicitly the conditions and process of foaming in anaerobic digesters. Studying the available knowledge on foam formation and relating it to the anaerobic digester process and operating conditions, this study presents a succinct and enhanced understanding of foaming in anaerobic digesters and introduces a simple and novel method to identify the onset of anaerobic digester foaming based on analysis of historical data from a field-scale system.

Keywords: anaerobic digester, foaming, biogas, surfactant, wastewater

Procedia PDF Downloads 443
26994 A Bayesian Approach for Analyzing Academic Article Structure

Authors: Jia-Lien Hsu, Chiung-Wen Chang

Abstract:

Research articles may follow a simple and succinct structure of organizational patterns, called moves. For example, considering extended abstracts, we observe that an extended abstract usually consists of five moves: Background, Aim, Method, Results, and Conclusion. As another example, when publishing articles in PubMed, authors are encouraged to provide a structured abstract, which is an abstract with distinct and labeled sections (e.g., Introduction, Methods, Results, Discussion) for rapid comprehension. This paper introduces a method for computational analysis of move structures (i.e., Background-Purpose-Method-Result-Conclusion) in abstracts and introductions of research documents, instead of a manual, time-consuming, and labor-intensive analysis process. In our approach, sentences in a given abstract and introduction are automatically analyzed and labeled with a specific move (i.e., B-P-M-R-C in this paper) to reveal their rhetorical status. As a result, it is expected that the automatic analytical tool for move structures will help non-native speakers or novice writers become aware of appropriate move structures and internalize relevant knowledge to improve their writing. In this paper, we propose a Bayesian approach to determine move tags for research articles. The approach consists of two phases, a training phase and a testing phase. In the training phase, we build a Bayesian model based on a couple of given initial patterns and the corpus, a subset of CiteSeerX. In the beginning, the prior probability of the Bayesian model relies solely on the initial patterns. Subsequently, with respect to the corpus, we process each document one by one: extract features, determine tags, and update the Bayesian model iteratively. In the testing phase, we compare our results with tags manually assigned by experts. In our experiments, the accuracy of the proposed approach reaches a promising 56%.
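
A minimal sketch of a Bayesian move tagger in the spirit described above, using a multinomial naive Bayes classifier over bag-of-words features; the example sentences and labels are invented, and the iterative model-update loop of the paper is omitted.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training sentences labelled with B-P-M-R-C moves
sentences = [
    "Previous studies have focused on supervised approaches.",   # Background
    "We propose a Bayesian model for move tagging.",             # Purpose
    "The proposed approach reaches an accuracy of 56%.",         # Result
]
moves = ["B", "P", "R"]

tagger = make_pipeline(CountVectorizer(), MultinomialNB())
tagger.fit(sentences, moves)

# Tag a new sentence with its most probable move
print(tagger.predict(["Our goal is to assist novice writers."]))
```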

Keywords: academic English writing, assisted writing, move tag analysis, Bayesian approach

Procedia PDF Downloads 330
26993 Optimization of Pressure in Deep Drawing Process

Authors: Ajay Kumar Choubey, Geeta Agnihotri, C. Sasikumar, Rashmi Dwivedi

Abstract:

Deep-drawing operations are performed widely in industrial applications. For efficiency, it is very important to achieve parts with no or minimal defects. Deep-drawn parts are used in high-performance, high-strength, and high-reliability applications where tension, stress, load, and human safety are critical considerations. Wrinkling is a kind of defect caused by stresses in the flange part of the blank during metal forming operations. To avoid wrinkling, an appropriate blank-holder pressure/force or a drawbead can be applied. Nowadays, computer simulation plays a vital role in the field of manufacturing, offering advantages over previous conventional practice such as mass production, good product quality, and fast operation. In this study, a two-dimensional elasto-plastic finite element (FE) model of a mild steel blank has been developed to study the behavior of flange wrinkling and the deep drawing parameters under different blank-holder pressures (BHP). For this purpose, the commercially available finite element software ANSYS 14 has been used. The simulation results are critically studied, and salient conclusions are drawn.

Keywords: ANSYS, deep drawing, BHP, finite element simulation, wrinkling

Procedia PDF Downloads 447
26992 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches

Authors: Vahid Nourani, Atefeh Ashrafi

Abstract:

Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. Artificial neural networks (ANN), as a robust data-driven approach, have been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, whose number is increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, to classify the most related and effective input variables. Yet the selection of dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality, has often been overlooked. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of the Tabriz city wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target water quality parameter. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. With the optimal ANN structure, model B showed up to a 15% increase in the Determination Coefficient (DC) compared with model A. Thus, this study highlights the advantage of the MI measure in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
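
The two input-selection routes compared above can be sketched as follows, assuming hypothetical candidate variables X and a BOD target y; the actual plant data are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression

# Hypothetical candidate treatment-plant variables and effluent BOD target
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = X[:, 0] * 0.8 + X[:, 3] * 0.5 + rng.normal(scale=0.1, size=200)

# Model A: linear, variance-based reduction with PCA
X_pca = PCA(n_components=4).fit_transform(X)

# Model B: rank variables by mutual information with BOD and keep the top 4
mi = mutual_info_regression(X, y)
top4 = np.argsort(mi)[-4:]
X_mi = X[:, top4]

# Either X_pca or X_mi would then feed the ANN that predicts BOD.
```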

Keywords: Artificial Neural Networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant

Procedia PDF Downloads 128
26991 A Study of Electrowetting-Assisted Mold Filling in Nanoimprint Lithography

Authors: Wei-Hsuan Hsu, Yi-Xuan Huang

Abstract:

Nanoimprint lithography (NIL) offers the advantages of sub-10 nm features and low cost. NIL patterns the resist by physical deformation using a mold, which can easily reproduce the required nano-scale pattern. However, variations in process parameters and environmental conditions seriously affect reproduction quality, so ensuring the quality of the imprinted pattern is essential for industry. In this study, the authors used electrowetting technology to assist mold filling in the NIL process. A special mold structure was designed to cause electrowetting. During the imprinting process, when a voltage was applied between the mold and the substrate, the hydrophilicity/hydrophobicity of the mold surface could be switched. Both simulation and experiment confirmed that electrowetting technology can assist mold filling and avoid incomplete filling. The proposed method can also reduce crack formation during the de-molding process. Therefore, electrowetting technology can improve the process quality of NIL.

Keywords: electrowetting, mold filling, nano-imprint, surface modification

Procedia PDF Downloads 170
26990 Processing of Input Material as a Way to Improve the Efficiency of the Glass Production Process

Authors: Joanna Rybicka-Łada, Magda Kosmal, Anna Kuśnierz

Abstract:

One of the main problems of the glass industry is the still high consumption of energy needed to produce glass mass, as well as the increase in the prices of fuels and raw materials. Therefore, comprehensive actions are taken to improve the entire production process. The key element of these activities, starting from filling the set to receiving the finished product, is the melting process, whose tasks include dissolving the components of the set, removing bubbles from the resulting melt, and obtaining a chemically homogeneous glass melt. This process consumes over 90% of the total energy needed in the production process. The processes occurring in the set during its conversion have a significant impact on the further stages and speed of the melting process and, thus, on its overall effectiveness. The speed and course of the reactions depend on the chemical nature of the raw materials, the degree of their fragmentation, their thermal treatment, as well as the form of the introduced set. An opportunity to minimize segregation and accelerate the conversion of glass sets may be the development of new technologies for preparing and dosing sets. The previously preferred traditional method of melting the set, based on mixing all glass raw materials together in loose form, can be replaced with a set in a thickened form; such a solution is available on the market and avoids dust formation during filling. The aim of the project was to develop a glass set in a selectively or completely densified form and to examine the influence of set processing on the melting process and the properties of the glass.

Keywords: glass, melting process, glass set, raw materials

Procedia PDF Downloads 59
26989 Monitoring Prospective Sites for Water Harvesting Structures Using Remote Sensing and Geographic Information Systems-Based Modeling in Egypt

Authors: Shereif. H. Mahmoud

Abstract:

Egypt has limited water resources and will be under water stress by the year 2030. Therefore, Egypt should consider natural and non-conventional water resources to overcome such a problem, and rainwater harvesting is one solution. This paper presents a geographic information system (GIS)-based decision support system (DSS) methodology that uses remote sensing data, field surveys, and GIS to identify potential RWH areas. The inputs to the DSS include maps of rainfall surplus, slope, potential runoff coefficient (PRC), land cover/use, and soil texture. The output is a map showing potential sites for RWH. Identification of suitable RWH sites was implemented in the ArcGIS model environment using the model builder of ArcGIS 10.1. Based on Analytical Hierarchy Process (AHP) analysis taking into account the five layers, the spatial extents of RWH suitability areas were identified using Multi-Criteria Evaluation (MCE). The suitability model generated a suitability map for RWH with four suitability classes: excellent, moderate, poor, and unsuitable. The spatial distribution of the suitability map showed that the areas with excellent suitability for RWH are concentrated in the northern part of Egypt. On average, 3.24% of the total area has excellent or good suitability for RWH, while 45.04% and 51.48% of the total area have moderate suitability or are unsuitable, respectively. The majority of the areas with excellent suitability have slopes between 2 and 8% and intensively cultivated land. The major soil type in the excellently suited areas is loam, and the rainfall ranges from 100 up to 200 mm. Validation of the technique was carried out by comparing the locations of existing RWH structures with the generated suitability map using the proximity analysis tool of ArcGIS 10.1. The result shows that most of the existing RWH structures are categorized as successful.
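
The core of the suitability mapping is a weighted overlay of reclassified layers with AHP-derived weights; a minimal sketch with hypothetical weights and tiny example rasters is given below.

```python
import numpy as np

def rwh_suitability(layers, weights):
    """Weighted overlay of reclassified raster layers (e.g. rainfall, slope,
    runoff coefficient, land cover, soil texture), each already scored on a
    common suitability scale. The weights would come from AHP pairwise
    comparisons; the values used here are hypothetical."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # AHP weights must sum to 1
    stacked = np.stack(layers)                   # shape: (n_layers, rows, cols)
    return np.tensordot(weights, stacked, axes=1)

# Example with three tiny 2x2 layers scored 1 (unsuitable) to 4 (excellent)
rain = np.array([[4, 3], [2, 1]])
slope = np.array([[3, 3], [2, 2]])
soil = np.array([[4, 2], [3, 1]])
score = rwh_suitability([rain, slope, soil], [0.5, 0.3, 0.2])
```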

Keywords: rainwater harvesting (RWH), geographic information system (GIS), analytical hierarchy process (AHP), multi-criteria evaluation (MCE), decision support system (DSS)

Procedia PDF Downloads 358
26988 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing's test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that case loads in public administration could be divided between a manual and an automated decision track. The automated decision track will be an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the case load will be dealt with by both a human case worker and the automated recommender system. In those cases, an experienced human case worker will have the role of an evaluator, choosing between the two decisions. This model will ensure that the algorithmic recommender system does not compromise the quality of the legal decision-making in the institution. It also enhances the legitimacy of using algorithmic decision support because it provides justification for its use: the system is seen as superior to human decisions when its recommendations are preferred by experienced case workers. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issue that legal decision-making is subject to legislative and judicial changes and that legal interpretation is context sensitive. Both of these issues require continuous supervision of, and adjustments to, algorithmic recommender systems when they are used for legal decision-making purposes.

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 216
26987 Evaluation of Free Technologies as Tools for Business Process Management

Authors: Julio Sotomayor, Daniel Yucra, Jorge Mayhuasca

Abstract:

The article presents an evaluation of free technologies for business process automation, with emphasis only on tools compatible with the General Public License (GPL). The compendium of technologies was based on promoting a service-oriented enterprise architecture (SOA) and the establishment of a business process management system (BPMS). The methodology for the selection of tools was Agile UP. This proposal allows businesses to achieve technological sovereignty and independence, in addition to promoting service orientation and the development of free software based on components.

Keywords: BPM, BPMS suite, open-source software, SOA, enterprise architecture, business process management

Procedia PDF Downloads 287
26986 Hidden Oscillations in the Mathematical Model of the Optical Binary Phase Shift Keying (BPSK) Costas Loop

Authors: N. V. Kuznetsov, O. A. Kuznetsova, G. A. Leonov, M. V. Yuldashev, R. V. Yuldashev

Abstract:

Nonlinear analysis of phase-locked loop (PLL)-based circuits is a challenging task; thus, simulation is widely used for their study. In this work, we consider a mathematical model of the optical Costas loop and demonstrate the limitations of the simulation approach related to the existence of so-called hidden oscillations in the phase space of the model.

Keywords: optical Costas loop, mathematical model, simulation, hidden oscillation

Procedia PDF Downloads 439
26985 Modification of Fick’s First Law by Introducing the Time Delay

Authors: H. Namazi, H. T. N. Kuan

Abstract:

Fick's first law relates the diffusive flux to the concentration field by postulating that the flux goes from regions of high concentration to regions of low concentration, with a magnitude proportional to the concentration gradient (spatial derivative). It is clear that the diffusion of flux cannot be instantaneous, and there should be some time delay in this propagation. However, Fick's first law does not consider this delay, which results in errors, especially when there is a considerable time delay in the process. In this paper, we introduce a time delay into Fick's first law. With this modification, we account for the fact that the diffusion of flux cannot be instantaneous. In order to verify this claim, an application example in fluid diffusion is discussed, and the results of the modified Fick's first law, the original Fick's first law, and the experiment are compared. The results of this comparison confirm the accuracy of the modified model. The modified model can be used in any application where the time delay has a considerable value and neglecting its effect would lead to undesirable results.
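
One way to write the delayed form described above alongside the classical law is given below; the exact notation used by the authors may differ.

```latex
% Classical Fick's first law and a delayed form consistent with the
% description above (the authors' exact notation may differ):
J(x,t) = -D\,\frac{\partial \phi(x,t)}{\partial x}
\qquad\longrightarrow\qquad
J(x,\,t+\tau) = -D\,\frac{\partial \phi(x,t)}{\partial x},
% i.e. the flux responds to the concentration gradient only after a delay tau.
```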

Keywords: Fick's first law, flux, diffusion, time delay, modified Fick’s first law

Procedia PDF Downloads 405
26984 The Curvature of Bending Analysis and Motion of Soft Robotic Fingers by Full 3D Printing with MC-Cells Technique for Hand Rehabilitation

Authors: Chaiyawat Musikapan, Ratchatin Chancharoen, Saknan Bongsebandhu-Phubhakdi

Abstract:

In recent years, soft robotic fingers have been used to support patients who have survived neurological diseases that result in muscular disorders and neural damage, such as stroke and Parkinson's disease, as well as inflammatory conditions such as De Quervain's syndrome and trigger finger. Hand function is essential for manipulating objects in activities of daily living (ADL). In this work, we propose a soft actuator model manufactured entirely by 3D printing, without a molding process and using a single material. Furthermore, we designed the model with a technique of multi cavitation cells (MC-Cells). We then demonstrate the bending curvature, fluidic pressure, and force that the model generates for assisting finger flexion and hand grasping. The soft actuators were also characterized mathematically through the chord length and the arc length. In addition, we used an adaptive push-button switch machine to measure the force in our experiment. We evaluated the biomechanical efficiency through the range of motion (ROM) at the metacarpophalangeal joint (MCP), the proximal interphalangeal joint (PIP), and the distal interphalangeal joint (DIP). Finally, the model exhibited the fluidic pressure, force, and ROM required to assist finger flexion and hand grasping.
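
Assuming a constant-curvature (circular-arc) bend, the curvature can be recovered numerically from the measured chord and arc lengths as sketched below; this is a generic assumption, not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import brentq

def curvature_from_chord_and_arc(arc_len, chord_len):
    """Curvature of a constant-curvature (circular-arc) bend from its arc
    length s and chord length c, using c = (2 / kappa) * sin(kappa * s / 2)."""
    if chord_len >= arc_len:          # straight finger: zero curvature
        return 0.0
    f = lambda k: (2.0 / k) * np.sin(k * arc_len / 2.0) - chord_len
    # kappa is bounded above by 2*pi/s (a full circle); search inside that range
    return brentq(f, 1e-9, 2.0 * np.pi / arc_len - 1e-9)

# e.g. a 100 mm finger whose tip-to-base chord shortens to 80 mm
kappa = curvature_from_chord_and_arc(100.0, 80.0)
bend_angle_deg = np.degrees(kappa * 100.0)
```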

Keywords: biomechanics efficiency, curvature bending, hand functional assistance, multi cavitation cells (MC-Cells), range of motion (ROM)

Procedia PDF Downloads 259
26983 Use of Diatomite for the Elimination of Chromium(III) from Wastewater, Annaba, Algeria

Authors: Sabiha Chouchane, Toufik Chouchane, Azzedine Hani

Abstract:

The wastewater was treated with a natural adsorbent, diatomite, to eliminate chromium(III). The diatomite comes from Sig (west of Algeria). Physicochemical characterization revealed that the diatomite is mainly made up of silica and lime, with a lower proportion of alumina. The process, considered in a static regime at 20°C, with a stirring speed of 150 rpm, a pH of 4, and a grain diameter of between 100 and 150 µm, shows that one gram of purified diatomite can fix, according to the Langmuir model, up to 39.64 mg of chromium, with pseudo-first-order kinetics. The pseudo-equilibrium time highlighted is 25 minutes. The value of the RL ratio, which reflects the affinity between the adsorbent and the adsorbate, indicates that the solid used has a good adsorption capacity. External transport of the metal ions from the solution to the adsorbent appears to be a rate-controlling step of the overall process, while internal transport in the pores is not the only mechanism limiting the sorption kinetics. Thermodynamic parameters show that chromium sorption is spontaneous and exothermic, with negative entropy.

Keywords: adsorption, diatomite, Cr(III), wastewater

Procedia PDF Downloads 53