Search results for: Porter's diamond model
12397 Evaluation of the Effect of Lactose Derived Monosaccharide on Galactooligosaccharides Production by β-Galactosidase
Authors: Yenny Paola Morales Cortés, Fabián Rico Rodríguez, Juan Carlos Serrato Bermúdez, Carlos Arturo Martínez Riascos
Abstract:
Numerous benefits of galactooligosaccharides (GOS) as prebiotics have motivated the study of enzymatic processes for their production. These processes have particular complexities because several factors, such as enzyme type, reaction medium pH, substrate concentrations and the presence of inhibitors, make high productivity difficult. In the present work, the production of galactooligosaccharides (with degrees of polymerization of two, three and four) from lactose was studied. The study formulates a mathematical model that predicts the production of GOS from lactose using the enzyme β-galactosidase. The effect of pH on the reaction was studied: three pH values (6.0, 6.5 and 7.0) were evaluated in phosphate buffer. At pH 6.0 the enzymatic activity was insignificant, while at pH 7.0 the enzymatic activity was approximately 27 times greater than at pH 6.5. The latter result differs from previously reported results; therefore, pH 7.0 was chosen as the working pH. Additionally, the enzyme concentration was analyzed, which showed that its effect depends on the pH, and the concentration was set at 0.272 mM for the subsequent studies. Afterwards, experiments were performed varying the lactose concentration to evaluate its effects on the process and to generate data for fitting the mathematical model parameters. The mathematical model considers the reactions of lactose hydrolysis and transgalactosylation for the production of disaccharides and trisaccharides, together with their reverse reactions. The production of tetrasaccharides was negligible and was therefore not included in the model.
The reaction was monitored by HPLC, and the quantitative analysis of the experimental data was performed in Matlab, using solvers for the integration of differential equation systems (ode15s) and for nonlinear optimization (fminunc). The results confirm that the transgalactosylation and hydrolysis reactions are reversible; additionally, inhibition of GOS production by glucose and galactose is observed. Regarding the production process, the results show that high initial lactose concentrations are necessary because they favor the transgalactosylation reaction, while low concentrations favor the hydrolysis reactions.
Keywords: β-galactosidase, galactooligosaccharides, inhibition, lactose, Matlab, modeling
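The reaction scheme described above (reversible hydrolysis Lac ⇌ Glc + Gal and reversible transgalactosylation Lac + Gal ⇌ Tri) can be sketched as an ODE system. The sketch below is a Python/SciPy analogue of the paper's Matlab workflow (ode15s); all rate constants and the initial lactose concentration are hypothetical stand-ins for the fitted values:

```python
from scipy.integrate import solve_ivp

# Hypothetical rate constants -- placeholders for the values the paper fits
# to HPLC data with fminunc; units assume concentrations in mM, time in h.
K_HYD, K_HYD_REV = 0.8, 0.002        # Lac -> Glc + Gal and its reverse
K_TRANS, K_TRANS_REV = 0.004, 0.05   # Lac + Gal -> Tri and its reverse

def gos_kinetics(t, y):
    lac, glc, gal, tri = y
    r_hyd = K_HYD * lac - K_HYD_REV * glc * gal        # reversible hydrolysis
    r_trans = K_TRANS * lac * gal - K_TRANS_REV * tri  # reversible transgalactosylation
    return [-r_hyd - r_trans, r_hyd, r_hyd - r_trans, r_trans]

# 250 mM initial lactose, 10 h of reaction:
sol = solve_ivp(gos_kinetics, (0.0, 10.0), [250.0, 0.0, 0.0, 0.0],
                rtol=1e-9, atol=1e-9)
lactose, glucose, galactose, gos_tri = sol.y[:, -1]
```

Both reactions conserve glucosyl and galactosyl units, which gives a useful consistency check on any fitted model of this form.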
Procedia PDF Downloads 358
12396 Risk Assessment of Natural Gas Pipelines in Coal Mined Gobs Based on Bow-Tie Model and Cloud Inference
Authors: Xiaobin Liang, Wei Liang, Laibin Zhang, Xiaoyan Guo
Abstract:
Pipelines inevitably pass through coal mined gobs in mining areas, and the stability of the gobs has a great influence on the safety of the pipelines. After an extensive literature study and field research, it was found that few risk assessment methods exist for coal mined gob pipelines and that data on the gob sites are lacking. Therefore, the fuzzy comprehensive evaluation method based on expert opinions is widely used. However, subjective opinions or the limited experience of individual experts may lead to inaccurate evaluation results, so the accuracy of the results needs to be further improved. This paper presents a comprehensive approach to this problem by combining the bow-tie model and cloud inference. The evaluation process is as follows. First, a bow-tie model composed of a fault tree and an event tree is established to graphically illustrate the probability and consequence indicators of pipeline failure. Second, expert judgments are collected as interval estimates to improve the accuracy of the results, and the censored mean algorithm removes the maximum and minimum scores to improve their stability; the golden section method is used to determine the weights of the indicators and reduce the subjectivity of the index weights. Third, the failure probability and failure consequence scores of the pipeline are converted into three numerical characteristics by cloud inference, which better describes the ambiguity and volatility of the risk level. Finally, cloud drop graphs of failure probability and failure consequences are produced, which intuitively and accurately illustrate the ambiguity and randomness of the results. A case study of a coal mine gob pipeline carrying natural gas is investigated to validate the utility of the proposed method.
The evaluation results of this case show that the probability of pipeline failure is very low while the consequences of failure are serious, which is consistent with the actual situation.
Keywords: bow-tie model, natural gas pipeline, coal mine gob, cloud inference
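The "three numerical characteristics" used in cloud inference are commonly the expectation Ex, entropy En and hyper-entropy He of a normal cloud model. A minimal sketch of the standard backward cloud generator that estimates them from scores; the expert scores below are invented for illustration:

```python
import math

def backward_cloud(scores):
    """Backward cloud generator: estimate the three numerical characteristics
    (Ex, En, He) of a normal cloud model from a sample of expert scores."""
    n = len(scores)
    ex = sum(scores) / n                                # expectation Ex
    mad = sum(abs(s - ex) for s in scores) / n          # mean absolute deviation
    en = math.sqrt(math.pi / 2.0) * mad                 # entropy En
    var = sum((s - ex) ** 2 for s in scores) / (n - 1)  # sample variance
    he = math.sqrt(max(var - en ** 2, 0.0))             # hyper-entropy He
    return ex, en, he

# Invented failure-consequence scores from six experts on a 0-10 scale:
ex, en, he = backward_cloud([6.5, 7.0, 6.8, 7.2, 6.6, 6.9])
```

En captures the dispersion (ambiguity) of the scores and He the randomness of that dispersion, which is what the cloud drop graphs visualize.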
Procedia PDF Downloads 250
12395 Vertical Uplift Capacity of a Group of Equally Spaced Helical Screw Anchors in Sand
Authors: Sanjeev Mukherjee, Satyendra Mittal
Abstract:
This paper presents experimental investigations on the behaviour of groups of single, double and triple helical screw anchors embedded vertically at the same level in sand. The tests were carried out on one, two, three and four anchors in sand for different depths of embedment, keeping both shallow and deep modes of behaviour in mind. The testing program comprised 48 tests conducted on three model anchors installed in sand whose density was kept constant throughout the tests. It was observed that the ultimate pullout load varied significantly with the installation depth of the anchor and the number of anchors. The apparent coefficient of friction (f*) between anchor and soil was also calculated from the test results and was found to vary between 1.02 and 4.76 for one, two, three and four single, double and triple helical screw anchors. Plate load tests conducted on the model soil showed that the value of φ increases from 35° for virgin soil to 48° for soil with four double helical screw anchors. Graphs of the ultimate pullout capacity of groups of two, three and four anchors relative to one anchor were plotted, and design equations correlating them have been proposed. Based on these findings, it is concluded that the load-displacement relationships for all groups can be reduced to a common curve. A 3-D finite element model in PLAXIS was used to confirm the results obtained from the laboratory tests, and the agreement is excellent.
Keywords: apparent coefficient of friction, helical screw anchor, installation depth, plate load test
Procedia PDF Downloads 555
12394 Application of Griddization Management to Construction Hazard Management
Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu
Abstract:
Hazard management, which can prevent fatal accidents and property losses, is a fundamental process during the construction stage of buildings. However, due to a lack of safety supervision resources and operational pressures, the implementation of hazard management in China is poor and ineffective. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to make the hazard management process efficient and effective. After examining the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, a griddization computing infrastructure for construction hazard management is designed that includes five layers: a resource entity layer, an information management layer, a task management layer, a knowledge transformation layer and an application layer. This infrastructure serves as the technical support for realizing grid management. Second, this study divides construction hazards into grids at the city, district and construction site levels according to grid principles. Last, a griddization management process including hazard identification, assessment and control is developed, in which all stakeholders of construction safety management, such as owners, contractors, supervision organizations and government departments, take their corresponding responsibilities. Finally, a case study based on actual construction hazard identification, assessment and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of the model is that it realizes information sharing and cooperative management between the various safety management departments.
Keywords: construction hazard, griddization computing, grid management, process
Procedia PDF Downloads 275
12393 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple
Authors: Hasan Basaran, Emre Unal
Abstract:
Formation flying of satellites is a mission that involves relative position keeping between different satellites in a constellation. In this study, different control algorithms are compared with one another in terms of velocity increment (ΔV) and tracking error. Various control methods, covering continuous and impulsive approaches, are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control and model predictive control are designed and compared with an impulsive feedback law based on mean orbital elements. The feedback linearization and sliding mode control approaches use identical mathematical models that include second-order Earth oblateness effects. The model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is carried out for four different initial errors and evaluated using velocity increment, root-mean-square error, maximum steady-state error and settling time. It was observed that the impulsive law consumed the least ΔV while producing the highest maximum steady-state error; the continuous control laws consumed higher velocity increments but produced smaller tracking errors. Finally, an inversely proportional relationship between tracking error and velocity increment was established.
Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode
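The circular-chief-orbit assumption of the model predictive controller corresponds to the Clohessy-Wiltshire relative dynamics. A minimal sketch of their in-plane closed-form solution; the chief orbit and initial conditions below are illustrative, not taken from the paper:

```python
import math

MU_EARTH = 3.986004418e14                      # m^3/s^2
A_CHIEF = 6778e3                               # chief semi-major axis, ~400 km LEO (assumed)
N_MEAN = math.sqrt(MU_EARTH / A_CHIEF ** 3)    # mean motion, rad/s

def cw_in_plane(t, x0, y0, xd0, yd0, n=N_MEAN):
    """Closed-form in-plane Clohessy-Wiltshire solution.
    x: radial offset, y: along-track offset of the deputy from the chief."""
    s, c = math.sin(n * t), math.cos(n * t)
    x = (4.0 - 3.0 * c) * x0 + (s / n) * xd0 + (2.0 / n) * (1.0 - c) * yd0
    y = (6.0 * (s - n * t) * x0 + y0
         + (2.0 / n) * (c - 1.0) * xd0
         + (1.0 / n) * (4.0 * s - 3.0 * n * t) * yd0)
    return x, y

# With yd0 = -2*n*x0 the secular along-track drift cancels, so the deputy
# traces a closed relative ellipse and returns after one orbital period:
X0 = 100.0                                     # 100 m radial offset
PERIOD = 2.0 * math.pi / N_MEAN
x_after, y_after = cw_in_plane(PERIOD, X0, 0.0, 0.0, -2.0 * N_MEAN * X0)
```

Initial conditions violating the no-drift condition produce the secular error that the compared controllers must spend ΔV to remove.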
Procedia PDF Downloads 105
12392 The Importance of Including All Data in a Linear Model for the Analysis of RNAseq Data
Authors: Roxane A. Legaie, Kjiana E. Schwab, Caroline E. Gargett
Abstract:
Studies looking at the changes in gene expression from RNAseq data often make use of linear models. It is also common practice to focus on a subset of data for a comparison of interest, leaving aside the samples not involved in this particular comparison. This work shows the importance of including all observations in the modeling process to better estimate variance parameters, even when the samples included are not directly used in the comparison under test. The human endometrium is a dynamic tissue, which undergoes cycles of growth and regression with each menstrual cycle. The mesenchymal stem cells (MSCs) present in the endometrium are likely responsible for this remarkable regenerative capacity. However, recent studies suggest that MSCs also play a role in the pathogenesis of endometriosis, one of the most common medical conditions affecting the lower abdomen in women, in which endometrial tissue grows outside the womb. In this study we compared RNAseq gene expression profiles between MSCs and non-stem cell counterparts ('non-MSC') obtained from women with ('E') or without ('noE') endometriosis. Raw read counts were used for differential expression analysis using a linear model with the limma-voom R package, including either all samples in the study or only the samples belonging to the subset of interest (e.g. for the comparison 'E vs noE in MSC cells', including only MSC samples from E and noE patients but not the non-MSC ones). Using the full dataset we identified about 100 differentially expressed (DE) genes between E and noE samples in MSC samples (adj.p-val < 0.05 and |logFC|>1), while only 9 DE genes were identified when using only the subset of data (MSC samples only). Important genes known to be involved in endometriosis, such as KLF9 and RND3, were missed in the latter case.
When looking at the MSC vs non-MSC comparison, the linear model including all samples identified 260 genes for noE samples (including the stem cell marker SUSD2), while the subset analysis did not identify any DE genes. When looking at E samples, 12 genes were identified with the first approach and only 1 with the subset approach. Although the stem cell marker RGS5 was found in both cases, the subset test missed important genes involved in stem cell differentiation, such as NOTCH3, and other potentially related genes to be used for further investigation and pathway analysis.
Keywords: differential expression, endometriosis, linear model, RNAseq
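Much of the benefit of modeling all samples comes from the extra residual degrees of freedom behind the variance estimate. A toy Python illustration of that point (limma-voom itself is an R package, and the expression values below are invented):

```python
import statistics

# Invented log-expression values for one gene in the four sample groups:
groups = {
    "MSC_E":      [5.1, 5.4, 4.9, 5.2],
    "MSC_noE":    [6.0, 6.3, 5.8, 6.1],
    "nonMSC_E":   [3.0, 3.2, 2.9, 3.1],
    "nonMSC_noE": [3.3, 3.5, 3.2, 3.4],
}

def pooled_variance(data):
    """Pooled residual variance and its degrees of freedom across groups."""
    df = sum(len(v) - 1 for v in data.values())
    s2 = sum((len(v) - 1) * statistics.variance(v) for v in data.values()) / df
    return s2, df

# Fitting all four groups doubles the residual df behind an MSC_E vs MSC_noE test:
s2_all, df_all = pooled_variance(groups)
s2_sub, df_sub = pooled_variance({k: groups[k] for k in ("MSC_E", "MSC_noE")})
```

More residual degrees of freedom means a steadier variance estimate and more power for the moderated tests, which is consistent with the subset analysis missing DE genes.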
Procedia PDF Downloads 432
12391 Multiparametric Optimization of Water Treatment Process for Thermal Power Plants
Authors: Balgaisha Mukanova, Natalya Glazyrina, Sergey Glazyrin
Abstract:
This article considers a multiparametric optimization problem for the water treatment process of thermal power plants. To optimize the process, namely, to reduce the amount of wastewater, a new technology was developed to reuse such water, and a mathematical model of this wastewater reuse technology was formulated. Optimization parameters were determined. The model consists of a material balance equation, an equation describing the kinetics of ion exchange for the non-equilibrium case and an equation for the ion exchange isotherm; the material balance equation includes a nonlinear term that depends on the kinetics of ion exchange. A direct problem of calculating the impurity concentration at the outlet of the water treatment plant was solved numerically, approximated by an implicit point-to-point difference scheme. The inverse problem was formulated to determine the parameters of the mathematical model of the water treatment plant operating under non-equilibrium conditions, and was then solved. From the calculation results, the start time of the filter regeneration process was determined, as well as the duration of the regeneration process and the amounts of regeneration and wash water. The multiparametric optimization of the water treatment process for thermal power plants decreased the amount of wastewater by 15%.
Keywords: direct problem, multiparametric optimization, optimization parameters, water treatment
Procedia PDF Downloads 387
12390 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: maximum likelihood hypothesis testing based on decision theory, and statistical pattern recognition based on feature extraction. The statistical pattern recognition method, which comprises feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving the performance of the algorithm. This addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient feature has in recognizing signals at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach still classifies well at low SNR; even at an SNR of -15 dB, the recognition accuracy reaches 76%.
Keywords: communication signal, feature extraction, Holder coefficient, improved cloud model
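A Holder coefficient in this context is a Hölder-inequality-based similarity between a signal feature sequence and reference sequences. A minimal sketch under that reading; the signal envelope and the rectangular and triangular reference sequences below are invented:

```python
def holder_coefficient(f1, f2, p=2.0):
    """Holder-inequality-based similarity between two non-negative sequences;
    equals 1 when the sequences are proportional, smaller otherwise."""
    q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1
    num = sum(a * b for a, b in zip(f1, f2))
    den = (sum(a ** p for a in f1) ** (1.0 / p)
           * sum(b ** q for b in f2) ** (1.0 / q))
    return num / den

# Invented envelope compared against rectangular and triangular references,
# giving a two-dimensional feature vector for a classifier such as an ELM:
envelope = [0.1, 0.4, 0.9, 1.0, 0.9, 0.4, 0.1]
rect = [1.0] * 7
tri = [0.25, 0.5, 0.75, 1.0, 0.75, 0.5, 0.25]
h_rect = holder_coefficient(envelope, rect)
h_tri = holder_coefficient(envelope, tri)
```

With p = 2 this reduces to a cosine-type similarity; the paper's improvement replaces such raw coefficients with more noise-stable cloud-model features.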
Procedia PDF Downloads 156
12389 Signature Verification System for a Banking Business Process Management
Authors: A. Rahaf, S. Liyakathunsia
Abstract:
In today’s world, banks face unprecedented operational pressure that tests the efficiency, effectiveness and agility of their business processes. In a typical banking process, a person’s authorization is usually based on his or her signature on almost all transactions, and signature verification is considered one of the most significant pieces of information needed for any bank document processing; banks usually use it to authenticate the identity of individuals. In this paper, a business process model is proposed in order to increase the quality of the verification process and to reduce the time and resources needed. To understand the current process, a survey was conducted and distributed among bank employees. After analyzing the survey, a process model was created using the Bizagi modeler, which helps to simulate the process after assigning a time and cost to each activity. The outcomes show that automation of the signature verification process is highly recommended for a banking business process.
Keywords: business process management, process modeling, quality, signature verification
Procedia PDF Downloads 427
12388 Three-Dimensional Finite Element Analysis of Geogrid-Reinforced Piled Embankments on Soft Clay
Authors: Mahmoud Y. Shokry, Rami M. El-Sherbiny
Abstract:
This paper aims to highlight the role of some parameters that may have a noticeable impact on the numerical analysis and design of embankments. It presents the results of a three-dimensional (3-D) finite element analysis, using the software PLAXIS 3D, of a monitored earth embankment constructed on a soft clay formation stabilized by cast-in-situ piles. A comparison between the predicted and monitored responses is presented to assess the adequacy of the adopted numerical model, which was then used in the targeted parametric study. Moreover, the results of the 3-D analyses were compared with analytical solutions. The study concluded that using mono pile caps decreased both the total and differential settlement and increased the efficiency of the piled embankment system. The study of geogrids revealed that they can contribute to decreasing the settlement and to maximizing the part of the embankment load transferred to the piles. Moreover, it was found that increasing the stiffness of the geogrid mobilizes higher tensile forces and hence influences the embankment load carried by the piles more effectively than using multiple layers of low-stiffness geogrid. The efficiency of the piled embankment system was also found to be greater for high embankments than for low ones. The comparison between the numerical 3-D model and the theoretical design methods revealed that many analytical solutions are conservative and less accurate than the 3-D finite element models.
Keywords: efficiency, embankment, geogrids, soft clay
Procedia PDF Downloads 323
12387 Mobile Augmented Reality for Collaboration in Operation
Authors: Chong-Yang Qiao
Abstract:
Mobile augmented reality (MAR) tracks targets in the surroundings and aids operators with interactive visualization of data and procedures, making equipment and systems easier to understand. Operators communicate and coordinate with each other remotely during continuous tasks, exchanging information and data between the control room and the work site. In routine work, distributed control system (DCS) monitoring and work-site manipulation require operators to interact in real time. The critical question is how to improve the user experience in cooperative work by applying augmented reality in a traditional industrial field. The purpose of this exploratory study is to find a cognitive model for multiple-task performance with MAR, with a particular focus on comparing different tasks and the environmental factors that influence information processing. Three experiments covered interface and interaction design for start-up, maintenance and shutdown content embedded in the mobile application. With time demands and human errors as evaluation criteria, and through analysis of the mental processes and behavioral actions during the multiple tasks, heuristic evaluation was used to assess operator performance under different situational factors and to record information processing in recognition, interpretation, judgment and reasoning. The research identifies the functional properties of MAR and constrains the development of the cognitive model. It can be concluded that MAR is easy to use and useful for operators in remote collaborative work.
Keywords: mobile augmented reality, remote collaboration, user experience, cognition model
Procedia PDF Downloads 197
12386 Global Direct Search Optimization of a Tuned Liquid Column Damper Subject to Stochastic Load
Authors: Mansour H. Alkmim, Adriano T. Fabro, Marcus V. G. De Morais
Abstract:
In this paper, a global direct search optimization algorithm is presented for tuning a tuned liquid column damper (TLCD), a class of passive structural control device, to reduce vibration. The objective is to find optimized TLCD parameters under stochastic loads derived from different wind power spectral densities. A verification is made against the analytical solution for an undamped primary system under white noise excitation. Finally, a numerical example considering a simplified wind turbine model is given to illustrate the efficacy of the TLCD. Results from the random vibration analysis are shown for four types of random wind excitation models, and the response PSDs obtained show good vibration attenuation.
Keywords: generalized pattern search, parameter optimization, random vibration analysis, vibration suppression
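A generalized pattern search of the kind named in the keywords can be sketched as a compass search. The TLCD objective below is a hypothetical smooth surrogate for the response variance, not the paper's stochastic response model:

```python
def pattern_search(f, x0, step=0.5, tol=1e-6, max_iter=100000):
    """Compass (generalized pattern) search: poll +/-step along each coordinate
    and halve the step whenever no poll point improves the objective."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:            # accept the first improving poll point
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                # mesh refinement
    return x, fx

# Hypothetical surrogate for the TLCD response variance as a function of
# (tuning ratio, head-loss coefficient), minimized at (1.0, 0.3):
objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 0.3) ** 2
x_opt, f_opt = pattern_search(objective, [0.0, 0.0])
```

Being derivative-free, this class of method suits objectives evaluated through random vibration analysis, where gradients are unavailable or noisy.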
Procedia PDF Downloads 275
12385 Designating and Evaluating a Healthy Eating Model at the Workplace: A Practical Strategy for Preventing Non-Communicable Diseases in Aging
Authors: Mahnaz Khalafehnilsaz, Rozina Rahnama
Abstract:
Introduction: The aging process has been linked to a wide range of non-communicable diseases that cause a loss of health-related quality of life. This process can worsen if adults do not follow an active and healthy lifestyle, especially in the workplace. This setting not only encourages a sedentary lifestyle but, in the long term, leads to obesity and overweight and to unhealthy, inactive aging. In addition, eating habits are known to be associated with active aging; it is therefore very valuable to know the eating patterns of people at work in order to detect and prevent diseases in the coming years. This study aimed to design and test a model to improve eating habits among employees of an industrial complex as a practical strategy. Material and method: The present research was a mixed-method study with a sequential exploratory design carried out in two phases, qualitative and quantitative, in 2018. In the first phase, participants were selected by purposive sampling (n=34) to ensure representation of different job roles, hours worked, gender, grade and age groups, and semi-structured interviews were used. All interviews were conducted in the workplace and were audio recorded, transcribed verbatim and analyzed using the Strauss and Corbin approach. The interview question asked what their experiences of eating at work were and how these nutritional habits could affect their health in old age. A total of 1500 basic codes were generated in the open coding step; these were merged to create 17 categories, six concepts and a conceptual model. The second phase was a cross-sectional study: after verification of the research tool, the developed questionnaire was administered to a group of employees, and a total of 500 subjects were included in the psychometric testing of the conceptual model.
Findings: Six main concepts were identified: 1. undesirable stress control, 2. lack of eating knowledge, 3. the effect of the social network, 4. lack of motivation for healthy habits, 5. environmental-organizational intensifiers, and 6. unhealthy eating behaviors. The core concept was loss of motivation to perform preventive behavior. The main constructs of the motivation-based model for the promotion of eating habits are modification and promotion of eating habits, increase of knowledge and competency, conveying a culture of healthy nutrition behavior, behavioral modeling especially at older ages, and desirable stress control. Conclusion: A key factor in unhealthy eating behavior at the workplace is a lack of motivation, which can be an obstacle to performing preventive behaviors at work and can affect the healthy aging process in the long term. The motivation-based model could be considered an effective conceptual framework and instrument for designing interventions that promote healthy and active aging.
Keywords: aging, eating habits, older age, workplace
Procedia PDF Downloads 101
12384 Predictions for the Anisotropy in Thermal Conductivity in Polymers Subjected to Model Flows by Combination of the eXtended Pom-Pom Model and the Stress-Thermal Rule
Authors: David Nieto Simavilla, Wilco M. H. Verbeeten
Abstract:
The viscoelastic behavior of polymeric flows under isothermal conditions has been extensively researched. However, most processing of polymeric materials occurs under non-isothermal conditions, and understanding the linkage between the thermo-physical properties and the process state variables remains a challenge. Furthermore, the cost and energy required to manufacture, recycle and dispose of polymers are strongly affected by the thermo-physical properties and their dependence on state variables such as temperature and stress. Experiments show that thermal conductivity in flowing polymers is anisotropic (i.e. direction dependent), a phenomenon previously omitted in the study and simulation of industrially relevant flows. Our work combines experimental evidence of a universal relationship between the thermal conductivity and stress tensors (i.e. the stress-thermal rule) with differential constitutive equations for the viscoelastic behavior of polymers to provide predictions for the anisotropy in thermal conductivity in uniaxial, planar, equibiaxial and shear flow in commercial polymers. A particular focus is placed on the eXtended Pom-Pom model, which is able to capture the non-linear behavior in both shear and elongational flows. The predictions provided by this approach are amenable to implementation in finite element packages, since the viscoelastic and thermal behavior can be described by a single equation. Our results include predictions of flow-induced anisotropy in thermal conductivity for low- and high-density polyethylene, as well as confirmation of our method through comparison with a number of thermoplastic systems for which measurements of anisotropy in thermal conductivity are available. Remarkably, this approach allows universal predictions of anisotropy in thermal conductivity that can be used in simulations of complex flows in which only the most fundamental rheological behavior of the material has been characterized (i.e. there is no need for adjusting parameters other than those in the constitutive model). Accounting for the anisotropy in thermal conductivity of polymers in industrially relevant flows benefits the optimization of manufacturing processes as well as the mechanical and thermal performance of finished plastic products during use.
Keywords: anisotropy, differential constitutive models, flow simulations in polymers, thermal conductivity
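The stress-thermal rule can be read as stating that the deviatoric (anisotropic) part of the conductivity tensor is proportional to the deviatoric extra stress. A minimal sketch under that reading; the conductivity and stress-thermal coefficient values below are order-of-magnitude assumptions, not measured data:

```python
import numpy as np

K_EQ = 0.24    # equilibrium (isotropic) conductivity, W/(m K) -- assumed value
CT = 3.0e-8    # dimensionless stress-thermal coupling per Pa -- assumed magnitude

def conductivity_tensor(extra_stress):
    """Stress-thermal rule sketch: k = k_eq * (I + Ct * dev(tau)), i.e. the
    deviatoric part of k is proportional to the deviatoric extra stress."""
    tau = np.asarray(extra_stress, dtype=float)
    dev = tau - np.trace(tau) / 3.0 * np.eye(3)   # deviatoric part of the stress
    return K_EQ * np.eye(3) + K_EQ * CT * dev

# Uniaxial extension with 1 MPa extra stress along the stretch direction:
k = conductivity_tensor(np.diag([1.0e6, 0.0, 0.0]))
```

Because any constitutive model (such as the eXtended Pom-Pom model) supplies the stress tensor, a map like this yields the conductivity tensor with no extra field equations, which is what makes the approach convenient in finite element packages.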
Procedia PDF Downloads 182
12383 Speed Control of Brushless DC Motor Using PI Controller in MATLAB Simulink
Authors: Do Chi Thanh, Dang Ngoc Huy
Abstract:
Nowadays, variable-speed drive systems appear in more and more small-scale and large-scale applications, such as the electric vehicle industry, household appliances, medical equipment and other industrial fields, which has led to the development of brushless DC (BLDC) motors. BLDC drives have many advantages, such as higher efficiency, better speed-torque characteristics, high power density and low maintenance cost compared to other conventional motors. Most BLDC motors use a proportional-integral (PI) controller and a pulse width modulation (PWM) scheme for speed control. This article describes a simulation model of BLDC motor drive control built with MATLAB/Simulink simulation software. The simulation model includes a BLDC motor dynamics block, a Hall sensor signal generation block, an inverter converter block and a PI controller.
Keywords: brushless DC motor, BLDC, six-step inverter, PI speed
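The PI speed loop at the heart of such a model can be sketched outside Simulink as a discrete simulation. The motor parameters and gains below are assumptions for illustration, and the electrical (six-step inverter and Hall sensor) dynamics are deliberately neglected:

```python
def simulate_pi_speed(w_ref=100.0, kp=0.5, ki=4.0, dt=1e-3, steps=5000):
    """Discrete PI speed loop around a first-order mechanical model,
    J*dw/dt = kt*i - b*w; electrical dynamics are neglected for brevity."""
    J, b, kt = 0.01, 0.05, 0.2   # inertia, friction, torque constant (assumed)
    w, integral = 0.0, 0.0
    for _ in range(steps):
        err = w_ref - w                     # speed error, rad/s
        integral += err * dt                # accumulate the integral term
        i_cmd = kp * err + ki * integral    # PI output = current command
        w += (kt * i_cmd - b * w) / J * dt  # explicit Euler step of the mechanics
    return w

w_final = simulate_pi_speed()   # speed after 5 s with a 100 rad/s reference
```

The integral term removes the steady-state error that a proportional-only controller would leave against the friction torque.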
Procedia PDF Downloads 74
12382 Analysis of Rotor-Bearing System Dynamic Interaction with Bearing Supports
Abstract:
Frequently, in the design of machines, some of the parameters that directly affect the rotor dynamics of the machines are not accurately known; bearing support stiffness is one such parameter. One of the most basic principles to grasp in rotor dynamics is the influence of bearing stiffness on the critical speeds and mode shapes of a rotor-bearing system. Taking a rig shafting as an example, this paper studies the lateral vibration of a multi-degree-of-freedom rotor using the finite element method (FEM). The FEM model is created, and the eigenvalues and eigenvectors are calculated and analyzed to find the natural frequencies, critical speeds and mode shapes. The critical speeds and mode shapes are then analyzed for a set of bearing stiffness values. The model made it possible to identify the critical speeds and the bearings that have an important influence on the vibration behavior.
Keywords: lateral vibration, finite element method, rig shafting, critical speed
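The influence of bearing stiffness on the natural frequencies can be illustrated with a lumped two-degree-of-freedom analogue of the rig shafting; the paper's FEM model has many more degrees of freedom, and the masses and stiffnesses here are invented:

```python
import math

def natural_frequencies(k_bearing, k_shaft=1.0e7, m=50.0):
    """Natural frequencies (Hz) of a symmetric 2-DOF lumped model: two disks
    of mass m, each on a bearing of stiffness k_bearing, coupled by a shaft
    stiffness k_shaft.  The stiffness matrix [[kb+ks, -ks], [-ks, kb+ks]]
    has eigenvalues kb and kb + 2*ks."""
    lam1 = k_bearing / m                     # in-phase mode (bearings only)
    lam2 = (k_bearing + 2.0 * k_shaft) / m   # out-of-phase mode (shaft engaged)
    return (math.sqrt(lam1) / (2.0 * math.pi),
            math.sqrt(lam2) / (2.0 * math.pi))

# Stiffening the bearings by two orders of magnitude raises both modes:
f_soft = natural_frequencies(1.0e6)
f_stiff = natural_frequencies(1.0e8)
```

Even this crude model shows why a parametric sweep over bearing stiffness is informative: the in-phase mode is governed almost entirely by the bearings, while the out-of-phase mode also feels the shaft.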
Procedia PDF Downloads 340
12381 Modeling Core Flooding Experiments for CO₂ Geological Storage Applications
Authors: Avinoam Rabinovich
Abstract:
CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in the porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a different method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms which impact the CO₂ distribution and trapping in the rock samples will be discussed, and examples from models and experiments will be provided. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test various other injection flow rates and fluid fractions which have not been tested in experiments. The models can also be used to bridge the gap between small-scale capillary heterogeneity effects (sub-core and core scale) and large-scale (reservoir scale) effects, known as the upscaling problem.
Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow
Procedia PDF Downloads 70
12380 Determination of Effect Factor for Effective Parameter on Saccharification of Lignocellulosic Material by Concentrated Acid
Authors: Sina Aghili, Ali Arasteh Nodeh
Abstract:
The use of tamarisk as a new lignocellulosic material to produce fermentable sugars in the bio-ethanol process was studied. The overall aim of this work was to establish the optimum conditions for acid hydrolysis of this new material and a mathematical model predicting glucose release as a function of the operating variables. Sulfuric acid concentrations in the range of 20 to 60% (w/w), process temperatures between 60 and 95 °C, hydrolysis times from 120 to 240 min, and solid contents of 5, 10, and 15% (w/w) were used as hydrolysis conditions. HPLC was used to analyze the product. The analysis indicated that glucose was the main fermentable sugar; it increased with time, temperature, and solid content, while acid concentration had a parabolic influence on glucose production. The process was modeled by a quadratic equation. From the fitted curves and the model, 42% acid concentration, 15% solid content, and 90 °C were found to be the optimum condition.
Keywords: fermentable sugar, saccharification, wood, hydrolysis
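The "parabolic influence" of acid concentration can be illustrated by fitting a quadratic model and locating its vertex, as sketched below; the yield data points are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Synthetic glucose yields vs. acid concentration (% w/w), shaped to
# rise and then fall, mimicking the parabolic effect in the abstract.
acid = np.array([20.0, 30.0, 40.0, 50.0, 60.0])     # % w/w
glucose = np.array([10.0, 22.0, 28.0, 25.0, 14.0])  # g/L (invented)

# Fit glucose = c2*acid^2 + c1*acid + c0 by least squares.
c2, c1, c0 = np.polyfit(acid, glucose, 2)
optimum = -c1 / (2.0 * c2)   # vertex of the fitted parabola
print(round(optimum, 1))
```

A negative leading coefficient confirms the concave (parabolic) response, and the vertex gives the acid concentration maximizing predicted glucose release.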
Procedia PDF Downloads 334
12379 Colour and Curcuminoids Removal from Turmeric Wastewater Using Activated Carbon Adsorption
Authors: Nattawat Thongpraphai, Anusorn Boonpoke
Abstract:
This study aimed to determine the removal of colour and curcuminoids from turmeric wastewater using granular activated carbon (GAC) adsorption. The adsorption isotherms and kinetic behavior of colour and curcuminoids were investigated using batch and fixed-bed column tests. The results indicated that the removal efficiencies of colour and curcuminoids were 80.13% and 78.64%, respectively, at the equilibrium time of 8 h. The adsorption isotherms of colour and curcuminoids were well fitted by the Freundlich adsorption model. The maximum adsorption capacities of colour and curcuminoids were 130 Pt-Co/g and 17 mg/g, respectively. The continuous-experiment data showed that the exhaustion concentrations of colour and curcuminoids occurred at 39 h of operation time. The adsorption of colour and curcuminoids from turmeric wastewater by GAC can be described by the Thomas model. The maximum adsorption capacities obtained from the kinetic approach were 39954 Pt-Co/g and 0.0516 mg/kg for colour and curcuminoids, respectively. Moreover, the decrease of colour and curcuminoids concentrations during the service time showed a similar trend.
Keywords: adsorption, turmeric, colour, curcuminoids, activated carbon
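The Freundlich fit mentioned in the abstract is commonly done by linear regression on log-transformed data, as in the sketch below; the equilibrium data here are synthetic, not the study's measurements.

```python
import numpy as np

# Freundlich isotherm: qe = Kf * Ce**(1/n).
# Linearized: ln(qe) = ln(Kf) + (1/n) * ln(Ce).
Kf_true, n_true = 5.0, 2.0
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium conc., mg/L
qe = Kf_true * Ce ** (1.0 / n_true)          # adsorbed amount, mg/g

slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf_est, n_est = np.exp(intercept), 1.0 / slope
print(Kf_est, n_est)
```

The recovered Kf and n parameterize the isotherm; the same log-log regression applies whether the adsorbate is quantified in mg/g (curcuminoids) or Pt-Co units (colour).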
Procedia PDF Downloads 424
12378 Robust Fault Diagnosis for Wind Turbine Systems Subjected to Multi-Faults
Authors: Sarah Odofin, Zhiwei Gao, Sun Kai
Abstract:
Operations, maintenance, and reliability of wind turbines have received much attention over the years due to the rapid expansion of wind farms. This paper explores an early fault diagnosis technique based on a scheme for a 5 MW wind turbine system, optimized by a genetic algorithm to be highly sensitive to faults and resilient to disturbances. A quantitative model-based analysis is applied for primary fault diagnosis and monitoring assessment, aiming to minimize downtime, which is mostly caused by component breakdown, and to maintain consistent productivity. Simulation results validate the wind turbine model and demonstrate system performance on practical examples of fault types. The results show the satisfactory effectiveness of the approach, investigated in a Matlab/Simulink/Gatool environment.
Keywords: disturbance robustness, fault monitoring and detection, genetic algorithm, observer technique
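To illustrate the genetic-algorithm tuning idea in the abstract, the toy sketch below evolves a scalar observer gain to minimize a cost function; the cost function, its minimizer, and all GA settings are illustrative assumptions, not the paper's 5 MW turbine scheme.

```python
import random

# Hypothetical trade-off between fault sensitivity and noise
# amplification, minimized at gain = 4 (an assumed value).
def cost(gain):
    return (gain - 4.0) ** 2 + 1.0

def ga_minimize(f, lo=0.0, hi=10.0, pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)               # arithmetic crossover
            child += rng.gauss(0.0, 0.1)        # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=f)

best = ga_minimize(cost)
print(best)
```

In the paper's setting the decision variables would be observer parameters and the cost a measure of fault sensitivity versus disturbance robustness; the GA mechanics are the same.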
Procedia PDF Downloads 380
12377 Human Errors in IT Services, HFACS Model in Root Cause Categorization
Authors: Kari Saarelainen, Marko Jantti
Abstract:
Trending the root causes of IT service incidents and problems is an important part of proactive problem management and service improvement. Human-error-related root causes are an important category in IT service management as well, although their proportion among root causes is smaller than in other industries. The research problem in this study is: how should the root causes of incidents related to human errors be categorized in an ITSM organization to effectively support service improvement? Categorization based on IT service management processes and on the Human Factors Analysis and Classification System (HFACS) taxonomy was examined in a case study. HFACS is widely used for human error root cause categorization across many industries. Combining these two categorization models in a two-dimensional matrix was found effective, yet impractical for daily work.
Keywords: IT service management, ITIL, incident, problem, HFACS, Swiss cheese model
Procedia PDF Downloads 489
12376 Fuzzy Logic Control for Flexible Joint Manipulator: An Experimental Implementation
Authors: Sophia Fry, Mahir Irtiza, Alexa Hoffman, Yousef Sardahi
Abstract:
This study presents an intelligent control algorithm for a flexible robotic arm. Fuzzy control is used to control the motion of the arm so as to maintain the arm tip at the desired position while reducing vibration and improving the system's speed of response. The fuzzy controller (FC) is based on adding the tip angular position to the arm deflection angle and using their sum as the feedback signal to the control algorithm. This reduces the complexity of the FC in terms of the input variables, the number of membership functions, the fuzzy rules, and the control structure. In addition, the design of the fuzzy controller is model-free and uses only our knowledge about the system. To show the efficacy of the FC, the control algorithm is implemented on the flexible joint manipulator (FJM) developed by Quanser. The results show that the proposed control method is effective in terms of response time, overshoot, and vibration amplitude.
Keywords: fuzzy logic control, model-free control, flexible joint manipulators, nonlinear control
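A minimal sketch of a rule-based fuzzy controller acting on a single feedback signal (the sum of tip angle and deflection, as in the abstract); the membership functions, rule base, and output actions below are illustrative assumptions, not the controller implemented on the Quanser FJM.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Fuzzify the feedback error into three sets over [-1, 1].
    mu = {"neg": tri(error, -2.0, -1.0, 0.0),
          "zero": tri(error, -1.0, 0.0, 1.0),
          "pos": tri(error, 0.0, 1.0, 2.0)}
    # One rule per set: negative error -> push positive, and so on.
    action = {"neg": 1.0, "zero": 0.0, "pos": -1.0}
    num = sum(mu[s] * action[s] for s in mu)
    den = sum(mu.values())
    return num / den if den else 0.0   # weighted-average defuzzification

print(fuzzy_control(-0.5), fuzzy_control(0.0), fuzzy_control(0.5))
```

Using one summed feedback signal keeps the rule base one-dimensional, which is exactly the complexity reduction the abstract claims for the FC design.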
Procedia PDF Downloads 118
12375 A Cost Effective Approach to Develop Mid-Size Enterprise Software Adopted the Waterfall Model
Authors: Mohammad Nehal Hasnine, Md Kamrul Hasan Chayon, Md Mobasswer Rahman
Abstract:
Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under budget constraints and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to designing mid-size enterprise software in a cost-effective way using the Waterfall model, one of the software development life cycle (SDLC) models. To fulfill the research objectives, in this study we developed mid-size enterprise software named “BSK Management System” that assists enterprise software clients with information resource management and with performing complex organizational tasks. The Waterfall model phases have been applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, Rich Pictures, Structured English, and a Data Dictionary have been implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants was conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system features simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.
Keywords: end-user application development, enterprise software design, information resource management, usability
Procedia PDF Downloads 438
12374 Effects of Mechanical Test and Shape of Grain Boundary on Martensitic Transformation in Fe-Ni-C Steel
Authors: Mounir Gaci, Salim Meziani, Atmane Fouathia
Abstract:
The purpose of the present paper is to model the behavior of a metal alloy, a TRIP steel (Transformation Induced Plasticity), during a solid/solid phase transition. A two-dimensional micromechanical model is implemented in the finite element software ZEBULON to simulate the martensitic transformation in an Fe-Ni-C steel grain under a mechanical tensile stress of 250 MPa. The effects of a non-uniform grain boundary and of the mechanical shear load criterion on the transformation and on the TRIP value during martensitic transformation are studied. The suggested mechanical criterion favours the influence of the shear phenomenon on the progression of the martensitic transformation (Magee's mechanism). The obtained results are in satisfactory agreement with experimental ones and show the influence of the grain boundary shape and of the chosen mechanical criterion (SMF) on the transformation parameters.
Keywords: martensitic transformation, non-uniform grain boundary, TRIP, shear mechanical force (SMF)
Procedia PDF Downloads 259
12373 Budgetary Performance Model for Managing Pavement Maintenance
Authors: Vivek Hokam, Vishrut Landge
Abstract:
An ideal maintenance program for an industrial road network is one that would maintain all sections at a sufficiently high level of functional and structural condition. However, due to various constraints such as budget, manpower, and equipment, it is not possible to carry out maintenance on all the needy industrial road sections within a given planning period. A rational and systematic priority scheme needs to be employed to select and schedule industrial road sections for maintenance. Priority analysis is a multi-criteria process that determines the best ranking of sections for maintenance based on several factors. In priority setting, difficult decisions must be made about which sections to select: is it more important to repair a section with poor functional conditions, such as an uncomfortable ride, or one with poor structural conditions, i.e., a section in danger of becoming structurally unsound? It would seem, therefore, that any rational priority-setting approach must consider the relative importance of the functional and structural condition of each section. Maintenance priority indices and pavement performance models tend to focus mainly on pavement condition, traffic criteria, etc. There is a need to develop a model suited to the limited budget provisions for pavement maintenance. Linear programming is one of the most popular and widely used quantitative techniques. A linear programming model provides an efficient method for determining an optimal decision chosen from a large number of possible decisions. The optimal decision is one that meets a specified objective of management, subject to various constraints and restrictions. The objective here is mainly the minimization of the maintenance cost of roads in an industrial area. In order to determine the objective function for the analysis of the distress model, it is necessary to put realistic data into the formulation.
Each type of repair is quantified over a number of stretches, with 1000 m taken as one stretch. The stretch considered in this study is 3750 m long. The quantities are put into an objective function for maximizing the number of repairs in a stretch in relation to quantity. The distresses observed in this stretch are potholes, surface cracks, rutting, and ravelling. The distress data are measured manually by observing each distress level on a stretch of 1000 m. The maintenance and rehabilitation measures currently followed are based on subjective judgment. Hence, there is a need to adopt a scientific approach in order to use the limited resources effectively. It is also necessary to determine the pavement performance and deterioration prediction relationship more accurately, together with the economic benefits of road networks with respect to vehicle operating cost. The infrastructure of the road network should yield the best results expected from the available funds. In this paper, the objective function for the distress model is determined by linear programming, and a deterioration model considering overloading is discussed.
Keywords: budget, maintenance, deterioration, priority
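An illustrative linear program in the spirit of this abstract: choose how much of each distress type (potholes, surface cracks, rutting, ravelling) to repair so as to maximize the repaired quantity under a budget cap. The unit costs, measured quantities, and budget below are invented numbers, not the study's data.

```python
from scipy.optimize import linprog

cost_per_unit = [50.0, 10.0, 20.0, 15.0]   # currency per unit repaired
quantity = [40.0, 300.0, 120.0, 200.0]     # measured distress quantities
budget = 5000.0

# linprog minimizes, so negate the objective to maximize repaired units.
c = [-1.0, -1.0, -1.0, -1.0]
A_ub = [cost_per_unit]                     # single budget constraint row
b_ub = [budget]
bounds = [(0.0, q) for q in quantity]      # cannot repair more than exists

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.fun, res.x)
```

The optimizer fills the cheapest repairs first (all 300 crack units, then part of the ravelling) until the budget is exhausted, which is the kind of budget-constrained prioritization the paper formalizes.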
Procedia PDF Downloads 207
12372 Simulation of Surface Runoff in Mahabad Dam Basin, Iran
Authors: Leila Khosravi
Abstract:
A major part of the drinking water in the northwest of Iran is supplied from the Mahabad reservoir, 80 km northwest of Mahabad. This reservoir collects water from a 750 km² catchment which is undergoing accelerated changes due to deforestation and urbanization. The main objective of this study is to develop a catchment modeling platform which translates ongoing land-use changes, soil data, precipitation, and evaporation into the surface runoff of the river discharging into the reservoir, using the Soil and Water Assessment Tool (SWAT) model along with hydro-meteorological records from 1997–2011. A variety of statistical indices were used to evaluate the simulation results for both the calibration and validation periods; among them, the robust Nash–Sutcliffe coefficients were found to be 0.52 and 0.62 in the calibration and validation periods, respectively. This project has developed a reliable modeling platform with the benchmark land physical conditions of the Mahabad dam basin.
Keywords: simulation, surface runoff, Mahabad dam, SWAT model
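The Nash–Sutcliffe coefficient reported in this abstract is a standard goodness-of-fit measure for hydrological simulations; the sketch below computes it for an invented pair of observed and simulated discharge series.

```python
import numpy as np

# Nash-Sutcliffe efficiency:
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
# NSE = 1 is a perfect fit; NSE <= 0 means the model is no better
# than predicting the observed mean.
def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / \
           np.sum((observed - observed.mean()) ** 2)

obs = [2.0, 4.0, 6.0, 8.0, 10.0]   # invented discharge series
sim = [2.5, 3.5, 6.5, 7.5, 10.5]
print(nse(obs, sim))
```

Against this measure, the study's calibration and validation scores of 0.52 and 0.62 indicate a moderately skilled simulation.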
Procedia PDF Downloads 206
12371 Method of Estimating Absolute Entropy of Municipal Solid Waste
Authors: Francis Chinweuba Eboh, Peter Ahlström, Tobias Richards
Abstract:
Entropy, as an outcome of the second law of thermodynamics, measures the level of irreversibility associated with any process. The identification and reduction of irreversibility in the energy conversion process helps to improve the efficiency of the system. The entropy of a pure substance, known as absolute entropy, is determined at an absolute reference point and is useful in the thermodynamic analysis of chemical reactions; however, municipal solid waste (MSW) is a structurally complicated material with unknown absolute entropy. In this work, an empirical model to calculate the absolute entropy of MSW based on the content of carbon, hydrogen, oxygen, nitrogen, sulphur, and chlorine on a dry, ash-free basis (daf) is presented. The proposed model was derived by statistical analysis from 117 relevant organic substances with known standard entropies which represent the main constituents of MSW. The substances were divided into different waste fractions, namely food, wood/paper, textiles/rubber, and plastics waste, and the standard entropies of each waste fraction and of the complete mixture were calculated. The correlation for the standard entropy of the complete waste mixture was found to be s°MSW = 0.0101C + 0.0630H + 0.0106O + 0.0108N + 0.0155S + 0.0084Cl (kJ K⁻¹ kg⁻¹), and the present correlation can be used for estimating the absolute entropy of MSW from the elemental composition of the fuel within the ranges 10.3% ≤ C ≤ 95.1%, 0.0% ≤ H ≤ 14.3%, 0.0% ≤ O ≤ 71.1%, 0.0% ≤ N ≤ 66.7%, 0.0% ≤ S ≤ 42.1%, 0.0% ≤ Cl ≤ 89.7%. The model is also applicable to the efficient modelling of a combustion system in a waste-to-energy plant.
Keywords: absolute entropy, irreversibility, municipal solid waste, waste-to-energy
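The correlation quoted in this abstract is straightforward to implement together with its stated validity ranges, as sketched below; the coefficients and ranges are taken directly from the abstract, while the sample composition is an invented, food-waste-like illustration.

```python
# Abstract's correlation for MSW absolute entropy (daf mass %),
# s = 0.0101C + 0.0630H + 0.0106O + 0.0108N + 0.0155S + 0.0084Cl,
# valid only within the stated composition ranges.
RANGES = {"C": (10.3, 95.1), "H": (0.0, 14.3), "O": (0.0, 71.1),
          "N": (0.0, 66.7), "S": (0.0, 42.1), "Cl": (0.0, 89.7)}
COEFF = {"C": 0.0101, "H": 0.0630, "O": 0.0106,
         "N": 0.0108, "S": 0.0155, "Cl": 0.0084}

def msw_entropy(comp):
    """Absolute entropy in kJ K^-1 kg^-1 from daf mass percentages."""
    for el, (lo, hi) in RANGES.items():
        x = comp.get(el, 0.0)
        if not lo <= x <= hi:
            raise ValueError(f"{el} = {x}% is outside the correlation range")
    return sum(COEFF[el] * comp.get(el, 0.0) for el in COEFF)

# Hypothetical composition (daf mass %), for illustration only:
s = msw_entropy({"C": 50.0, "H": 7.0, "O": 40.0, "N": 2.5, "S": 0.5})
print(round(s, 3))
```

Guarding the input against the published ranges matters because a linear correlation extrapolates silently outside the data it was fitted on.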
Procedia PDF Downloads 309
12370 Understanding Chronic Pain: Missing the Mark
Authors: Rachid El Khoury
Abstract:
Chronic pain is perhaps the most burdensome health issue facing the planet. Our understanding of the pathophysiology of chronic pain has increased substantially over the past 25 years, including but not limited to changes in the brain. However, we still do not know why chronic pain develops in some people and not in others. Most of the recent developments in pain science that have direct relevance to clinical management relate to our understanding of the role of the brain, the role of the immune system, or the role of cognitive and behavioral factors. Although the biopsychosocial model of pain management was presented decades ago, the bio-reductionist model unfortunately remains at the heart of many practices across professional and geographic boundaries. A large body of evidence shows that nociception is neither sufficient nor necessary for pain. Pain is a conscious experience that can certainly be, and often is, associated with nociception; however, it is always modulated by countless neurobiological, environmental, and cognitive factors. This study clarifies current misconceptions about chronic pain concepts and their misperception by clinicians. It also attempts to bridge the considerable gap between what we already know about pain but have somehow disregarded, developments in pain science, and clinical practice.
Keywords: chronic pain, nociception, biopsychosocial, neuroplasticity
Procedia PDF Downloads 63
12369 Derivation of Fragility Functions of Marine Drilling Risers Under Ocean Environment
Authors: Pranjal Srivastava, Piyali Sengupta
Abstract:
The performance of marine drilling risers is crucial in the offshore oil and gas industry to ensure safe drilling operations with minimum downtime. Experimental investigations of marine drilling risers are limited in the literature owing to the expensive and exhaustive test setup required to replicate a realistic riser model and ocean environment in the laboratory. Therefore, this study presents an analytical model of a marine drilling riser for determining its fragility under ocean environmental loading. In this study, the marine drilling riser is idealized as a continuous beam with a concentric circular cross-section. The hydrodynamic loading acting on the marine drilling riser is determined by Morison's equations. By considering the equilibrium of forces on the marine drilling riser for the connected and normal drilling conditions, the governing partial differential equations in terms of the independent variables z (depth) and t (time) are derived. Subsequently, the Runge-Kutta method and the finite difference method are employed for solving the partial differential equations arising from the analytical model. The proposed analytical approach is successfully validated against experimental results from the literature. From the dynamic analysis results of the proposed analytical approach, the critical design parameters (peak displacements, upper and lower flex joint rotations, and von Mises stresses) of marine drilling risers are determined. An extensive parametric study is conducted to explore the effects of top tension, drilling depth, ocean current speed, and platform drift on the critical design parameters of the marine drilling riser. Thereafter, incremental dynamic analysis is performed to derive the fragility functions of shallow-water and deep-water marine drilling risers under ocean environmental loading.
The proposed methodology can also be adopted for the downtime estimation of marine drilling risers, incorporating the ranges of uncertainties associated with the ocean environment, especially in deep and ultra-deep water.
Keywords: drilling riser, marine, analytical model, fragility
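The hydrodynamic load in this abstract comes from Morison's equation, which per unit length of a cylinder is a drag term plus an inertia term; the sketch below evaluates it for assumed fluid properties, coefficients, and flow kinematics (none of which are the study's values).

```python
import math

# Morison's equation, force per unit length on a cylinder of diameter D:
# f = 0.5*rho*Cd*D*u*|u|  (drag)  +  rho*Cm*(pi*D^2/4)*du/dt  (inertia).
def morison_force(u, du_dt, D=0.5, rho=1025.0, Cd=1.0, Cm=2.0):
    """Force per unit length (N/m); u is flow velocity (m/s)."""
    drag = 0.5 * rho * Cd * D * u * abs(u)
    inertia = rho * Cm * math.pi * D ** 2 / 4.0 * du_dt
    return drag + inertia

f = morison_force(u=1.2, du_dt=0.3)
print(round(f, 2))
```

In the paper's framework this load would be evaluated along the depth coordinate z at each time step and fed into the finite difference solution of the beam equations.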
Procedia PDF Downloads 146
12368 Impact Evaluation and Technical Efficiency in Ethiopia: Correcting for Selectivity Bias in Stochastic Frontier Analysis
Authors: Tefera Kebede Leyu
Abstract:
The purpose of this study was to estimate the impact of LIVES project participation on the level of technical efficiency of farm households in three regions of Ethiopia. We used household-level data gathered by IRLI between February and April 2014 for the year 2013 (retroactive). Data on 1,905 sample households (754 in the intervention group and 1,151 in the control group) were analyzed using the Stata software package, version 14. Efforts were made to combine stochastic frontier modeling with impact evaluation methodology, using the Heckman (1979) two-stage model to deal with possible selectivity bias arising from unobservable characteristics in the stochastic frontier model. Results indicate that farmers in the two groups are not efficient and operate below their potential frontiers; i.e., there is a potential to increase crop productivity through efficiency improvements in both groups. In addition, the empirical results revealed selection bias in both groups of farmers, confirming the justification for the use of a selection-bias-corrected stochastic frontier model. It was also found that intervention farmers achieved higher technical efficiency scores than the control group of farmers. Furthermore, the selectivity-bias-corrected model showed a different technical efficiency score for the intervention farmers, while it more or less remained the same for the control group farmers. However, the control group of farmers shows a higher dispersion, as measured by the coefficient of variation, compared to their intervention counterparts. Among the explanatory variables, the study found that a farmer's age (a proxy for farm experience), land certification, frequency of visits to the improved seed center, the farmer's education, and row planting are important contributing factors for participation decisions and hence for the technical efficiency of farmers in the study areas.
We recommend that policies targeting the design of development intervention programs in the agricultural sector focus more on providing farmers with on-farm visits by extension workers, provision of credit services, establishment of farmers’ training centers, and adoption of modern farm technologies. Finally, we recommend further research with this kind of methodological framework using a panel data set to test whether technical efficiency starts to increase or decrease with the length of time that farmers participate in development programs.
Keywords: impact evaluation, efficiency analysis and selection bias, stochastic frontier model, Heckman two-step
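The selectivity correction at the core of the Heckman two-step procedure referenced in this abstract is the inverse Mills ratio, computed from the first-stage probit index and then added as a regressor in the second stage. The sketch below computes it with standard-library math only; the index values are illustrative.

```python
import math

# Inverse Mills ratio: lambda(z) = phi(z) / Phi(z), where phi and Phi
# are the standard normal pdf and cdf and z is the probit index.
def inverse_mills(z):
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return pdf / cdf

# Correction terms for three hypothetical predicted probit indices:
lams = [inverse_mills(z) for z in (-1.0, 0.0, 1.0)]
print([round(l, 4) for l in lams])
```

The ratio shrinks as the participation index grows, so households that were almost certain to participate receive little correction, while marginal participants receive a large one; that asymmetry is what absorbs the selection bias in the second-stage frontier regression.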
Procedia PDF Downloads 75