Search results for: mathematical equations.
304 Detection of Some Drugs of Abuse from Fingerprints Using Liquid Chromatography-Mass Spectrometry
Authors: Ragaa T. Darwish, Maha A. Demellawy, Haidy M. Megahed, Doreen N. Younan, Wael S. Kholeif
Abstract:
Drug abuse testing is essential to confirm the misuse of drugs. Several analytical approaches have been developed for the detection of drugs of abuse in pharmaceutical and common biological samples, but few methodologies have been developed to identify them from fingerprints. Liquid Chromatography-Mass Spectrometry (LC-MS) plays a major role in this field. The current study aimed to assess the possibility of detecting some drugs of abuse (tramadol, clonazepam, and phenobarbital) from the fingerprints of drug abusers using LC-MS. The aim was extended to assess the possibility of detecting the above-mentioned drugs in the fingerprints of drug handlers for up to three days after handling the drugs. The study was conducted on randomly selected adult individuals who were either drug abusers seeking treatment at centers of drug dependence in Alexandria, Egypt, or normal volunteers who were asked to handle the studied drugs (drug handlers). Informed consent was obtained from all individuals. Participants were classified into three groups: a control group of 50 normal individuals (neither abusing nor handling drugs); a drug abuser group of 30 individuals who abused tramadol, clonazepam, or phenobarbital (10 individuals for each drug); and a drug handler group of 50 individuals who touched either the powder of the drugs of abuse (tramadol, clonazepam, or phenobarbital; 10 individuals for each drug) or the powder of control substances of similar appearance (white powders) that might be used in the adulteration of drugs of abuse (acetylsalicylic acid and acetaminophen; 10 individuals for each drug). Samples were taken from each handler on three consecutive days. The diagnosis of drug abuse was based on the current Diagnostic and Statistical Manual of Mental Disorders (DSM-V) and urine screening tests using an immunoassay technique.
Preliminary drug screening tests of urine samples were also done for the drug handler and control groups to indicate the presence or absence of the studied drugs of abuse. Fingerprints of all participants were then taken on filter paper previously soaked with methanol and analyzed by LC-MS using a SCIEX Triple Quad or QTRAP 5500 system. The concentration of drug in each sample was calculated from the regression equation between concentration (in ng/ml) and peak area for each reference standard. All fingerprint samples from drug abusers tested positive by LC-MS for the studied drugs, while all samples from control individuals tested negative. A significant association was noted between drug concentration and duration of abuse. Tramadol, clonazepam, and phenobarbital were also successfully detected from the fingerprints of drug handlers up to 3 days after handling the drugs. The mean concentration of the chosen drugs of abuse in the handler group decreased as the number of days since handling increased.
Keywords: drugs of abuse, fingerprints, liquid chromatography–mass spectrometry, tramadol
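The concentration readout described above is a straight-line calibration: fit peak area against known standard concentrations, then invert the line for an unknown sample. As a minimal sketch (the standard concentrations, peak areas, and sample area below are hypothetical, not the study's data):

```python
def fit_calibration(concs, areas):
    """Ordinary least-squares fit of peak area vs. concentration.

    Returns (slope, intercept) of the regression line
    area = slope * conc + intercept."""
    n = len(concs)
    mx = sum(concs) / n
    my = sum(areas) / n
    sxx = sum((x - mx) ** 2 for x in concs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(concs, areas))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def conc_from_area(area, slope, intercept):
    """Invert the calibration line to read concentration (ng/ml)
    off a measured peak area."""
    return (area - intercept) / slope

# Hypothetical reference-standard data: (concentration ng/ml, peak area)
standards = [(10, 1050), (50, 5020), (100, 10010), (200, 20050)]
slope, intercept = fit_calibration([c for c, _ in standards],
                                   [a for _, a in standards])
sample_conc = conc_from_area(7500, slope, intercept)
```

In practice a validated LC-MS method would also report linearity (r²), limits of detection and quantitation for each analyte; the sketch shows only the core regression step.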
Procedia PDF Downloads 119
303 Investigation of Permeate Flux through DCMD Module by Inserting S-Ribs Carbon-Fiber Promoters with Ascending and Descending Hydraulic Diameters
Authors: Chii-Dong Ho, Jian-Har Chen
Abstract:
The decline in permeate flux across membrane modules is attributed to the increase in temperature polarization resistance in flat-plate Direct Contact Membrane Distillation (DCMD) modules for pure water productivity. Researchers have found that this effect can be diminished by embedding turbulence promoters, which augment turbulence intensity at the cost of increased power consumption, thereby improving vapor permeate flux. The device performance of DCMD modules for permeate flux was further enhanced by shrinking the hydraulic diameters of the inserted S-ribs carbon-fiber promoters, while accounting for the increment in energy consumption. The mass-balance formulation, based on the resistance-in-series model with energy conservation in one-dimensional governing equations, was developed theoretically and tested experimentally on a flat-plate polytetrafluoroethylene/polypropylene (PTFE/PP) membrane module to predict permeate flux and temperature distributions. The ratio of permeate flux enhancement to energy consumption increment, which serves as an assessment of economic and technical feasibility, was calculated to determine suitable design parameters for DCMD operation with inserted S-ribs carbon-fiber turbulence promoters. An economic analysis was also performed, weighing both permeate flux improvement and energy consumption increment for modules with promoter-filled channels of different array configurations and various promoter hydraulic diameters. Results showed that the ratio of permeate flux improvement to energy consumption increment in descending hydraulic-diameter modules is higher than in uniform hydraulic-diameter modules. The fabrication details of the DCMD module implementing the S-ribs carbon-fiber filaments and the schematic configuration of the flat-plate DCMD experimental setup, with acrylic plates as external walls, are presented in this study.
The S-ribs carbon fibers act as turbulence promoters in the artificial hot saline feed stream, which was prepared by adding inorganic salt (NaCl) to distilled water. Theoretical predictions agreed well with experimental results, and the new design of the DCMD module with inserted S-ribs carbon-fiber promoters achieved considerable permeate flux enhancement. Additionally, the Nusselt number for the water-vapor-transferring membrane module with inserted S-ribs carbon-fiber promoters was generalized into a simplified expression to predict the heat transfer coefficient and permeate flux.
Keywords: permeate flux, Nusselt number, DCMD module, temperature polarization, hydraulic diameters
Procedia PDF Downloads 8
302 Comparative Analysis of in vitro Release Profile for Escitalopram and Escitalopram Loaded Nanoparticles
Authors: Rashi Rajput, Manisha Singh
Abstract:
Escitalopram oxalate (ETP) is an FDA-approved antidepressant drug of the SSRI (selective serotonin reuptake inhibitor) category used in the treatment of general anxiety disorder (GAD) and major depressive disorder (MDD). When taken orally, it is metabolized to S-demethylcitalopram (S-DCT) and S-didemethylcitalopram (S-DDCT) in the liver with the help of the enzymes CYP2C19, CYP3A4, and CYP2D6, causing side effects such as dizziness, fast or irregular heartbeat, headache, and nausea. Therefore, targeted and sustained drug delivery would be a helpful tool for increasing its efficacy and reducing side effects. The present study was designed to formulate a mucoadhesive nanoparticle formulation for the same. Escitalopram-loaded polymeric nanoparticles were prepared by the ionic gelation method, and the optimised formulation was characterized by zeta average particle size (93.63 nm), zeta potential (-1.89 mV), and TEM analysis (range of 60 nm to 115 nm), which confirms the nanometric size range of the drug-loaded nanoparticles, along with a polydispersity index of 0.117. In this research, we studied the in vitro drug release profile of ETP nanoparticles through a semipermeable dialysis membrane. Three important characteristics affecting the drug release behaviour were the particle size, ionic strength, and morphology of the optimised nanoparticles. The data showed that on increasing the particle size of the drug-loaded nanoparticles, the initial burst was reduced, which was comparatively higher for the free drug. The formulation with 1 mg/ml chitosan in 1.5 mg/ml tripolyphosphate solution showed steady release over the entire release period. This data was further validated through mathematical modelling to establish the mechanism of drug release kinetics, which showed a typical linear diffusion profile for the optimised ETP-loaded nanoparticles.
Keywords: ionic gelation, mucoadhesive nanoparticle, semi-permeable dialysis membrane, zeta potential
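The release-kinetics modelling step can be illustrated with a Korsmeyer-Peppas power-law fit, a standard way to classify diffusion-controlled ("linear diffusion") release. The time points and release fractions below are hypothetical, and the abstract does not state which kinetic model was actually used:

```python
import math

def fit_power_law(times, release_frac):
    """Korsmeyer-Peppas fit: Mt/Minf = k * t**n, linearised as
    log(Mt/Minf) = log(k) + n*log(t). Returns (k, n)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in release_frac]
    n_pts = len(xs)
    mx, my = sum(xs) / n_pts, sum(ys) / n_pts
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    k = math.exp(my - n * mx)
    return k, n

# Hypothetical cumulative release fractions at t = 1..5 h following
# ideal Fickian diffusion (exponent n = 0.5):
times = [1, 2, 3, 4, 5]
release = [0.20 * math.sqrt(t) for t in times]
k, n = fit_power_law(times, release)
# n close to 0.5 indicates Fickian (diffusion-controlled) release
```

A release exponent near 0.5 for a thin-film geometry is the usual signature of diffusion control; other exponents would suggest anomalous transport or erosion.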
Procedia PDF Downloads 294
301 Application of Mathematical Models for Conducting Long-Term Metal Fume Exposure Assessments for Workers in a Shipbuilding Factory
Authors: Shu-Yu Chung, Ying-Fang Wang, Shih-Min Wang
Abstract:
Conducting long-term exposure assessments is important for workers exposed to chemicals with chronic effects. However, such assessments usually encounter several constraints, including cost, workers' willingness, and interference with work practice, leading to inadequate long-term exposure data in the real world. In this study, an integrated approach was developed for conducting long-term exposure assessment for welding workers in a shipbuilding factory. A laboratory study was conducted to yield the fume generation rates under various operating conditions. The results and the measured environmental conditions were applied to the near-field/far-field (NF/FF) model for predicting long-term fume exposures via Monte Carlo simulation. The predicted long-term concentrations were then used to determine the prior distribution in Bayesian decision analysis (BDA). Finally, the resultant posterior distributions were used to assess long-term exposure and serve as a basis for initiating control strategies for shipbuilding workers. Results show that the NF/FF model was suitable for predicting exposures to the metal contents contained in welding fume. The resultant posterior distributions could effectively assess the long-term exposures of shipbuilding welders. Welders' long-term Fe, Mn, and Pb exposures were found to have a high probability of exceeding the action level, indicating that preventive measures should be taken to reduce welders' exposures immediately. Though the resultant posterior distribution can only be regarded as the best solution based on the currently available predicting and monitoring data, the proposed integrated approach can be regarded as a feasible solution for conducting long-term exposure assessment in the field.
Keywords: Bayesian decision analysis, exposure assessment, near field and far field model, shipbuilding industry, welding fume
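A minimal sketch of the NF/FF prediction step: the steady-state two-zone equations C_FF = G/Q and C_NF = C_FF + G/beta, sampled by Monte Carlo to estimate how often the near-field concentration exceeds a limit. All parameter ranges and the OEL below are assumptions for illustration, not the study's values:

```python
import random

def nf_ff_steady_state(G, Q, beta):
    """Steady-state two-zone (near-field/far-field) concentrations.

    G    : contaminant generation rate (mg/min)
    Q    : room supply/exhaust airflow (m^3/min)
    beta : interzonal airflow between near and far field (m^3/min)
    Returns (C_nf, C_ff) in mg/m^3."""
    c_ff = G / Q
    c_nf = c_ff + G / beta
    return c_nf, c_ff

def exceedance_fraction(n_trials, oel, seed=1):
    """Monte Carlo estimate of the probability that the near-field
    concentration exceeds the OEL, sampling G, Q and beta from
    assumed (hypothetical) uniform ranges."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        G = rng.uniform(0.5, 2.0)     # mg/min, assumed fume emission range
        Q = rng.uniform(10.0, 40.0)   # m^3/min, assumed ventilation range
        beta = rng.uniform(1.0, 5.0)  # m^3/min, assumed interzonal flow
        c_nf, _ = nf_ff_steady_state(G, Q, beta)
        if c_nf > oel:
            hits += 1
    return hits / n_trials

frac = exceedance_fraction(10000, oel=0.2)  # OEL value assumed for the sketch
```

In the study, the simulated concentration distribution feeds the BDA prior; here the sketch stops at the exceedance fraction.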
Procedia PDF Downloads 140
300 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open channel flow dynamics for river applications has been based on flow modelling using discrete numerical models derived from the hydrodynamic equations. The overall spatial characteristics of rivers, i.e., their length-to-depth-to-width ratio, generally allow one to disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia is a large river basin draining the country from south to north over 1550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study has been undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns have been completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended sediment and river bed samples were retrieved, and soil borings were performed along the banks. Hence, based on the DEM from the ground digital mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in an unstructured mesh environment. The calibration process was carried out by comparison with available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and the limitations related to pressure conditions did not allow for an accurate representation of erosion processes occurring over specific bank areas and dwellings. Notably, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross section, a consequence of severe erosion, has hindered the model's ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
Procedia PDF Downloads 319
299 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity
Authors: Ladislav Écsi, Roland Jančo
Abstract:
Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled together to give the overall stress-free intermediate configuration since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, an inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with the continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. The paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case or the compatible plastic deformation gradient in the latter case in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. 
The unstressed intermediate configuration, the unloaded configuration after the plastic flow where the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent, with both elastic and plastic components. The residual strains and stresses originate from the difference between the compatible plastic/permanent displacement field gradient and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements where the intermediate configuration is calculated; the analysis results are compared with those of the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility
Procedia PDF Downloads 123
298 Numerical Method for Productivity Prediction of Water-Producing Gas Well with Complex 3D Fractures: Case Study of Xujiahe Gas Well in Sichuan Basin
Authors: Hong Li, Haiyang Yu, Shiqing Cheng, Nai Cao, Zhiliang Shi
Abstract:
Unconventional resources have gradually become the main direction for oil and gas exploration and development. However, the productivity of gas wells, the level of water production, and the seepage law in tight fractured gas reservoirs vary greatly, which is why production prediction is so difficult. Firstly, a three-dimensional, multi-scale fracture, multiphase mathematical model based on an embedded discrete fracture model (EDFM) is established, and the material balance method is used to calculate the water body multiple from the production performance characteristics of a water-producing gas well, which helps construct a 'virtual water body'. Based on these, this paper presents a numerical simulation workflow that can adapt to different production modes of gas wells. The research results show that fractures have a double-sided effect: on the positive side, they can increase the initial production capacity, but on the negative side, they can connect to the water body, causing gas production to drop and water production to rise rapidly, showing a 'scissor-like' characteristic. It is worth noting that fractures at different angles have different abilities to connect with the water body: the higher the fracture angle, the earlier water may break through. When the reservoir is a single layer, there may be a stable water-free production period before the fractures connect with the water body; once connected, the 'scissors shape' appears. If the reservoir has multiple layers, gas and water are produced at the same time. The above gas-water relationship can be matched with the gas well production data of the Xujiahe gas reservoir in the Sichuan Basin. This method is used to predict the productivity of a well with hydraulic fractures in this gas reservoir, and the prediction results agree with on-site production data by more than 90%.
This shows that the research idea has great potential for the productivity prediction of water-producing gas wells. Early prediction results are of great significance for guiding the design of development plans.
Keywords: EDFM, multiphase, multilayer, water body
Procedia PDF Downloads 193
297 Calculation of Fractal Dimension and Its Relation to Some Morphometric Characteristics of Iranian Landforms
Authors: Mitra Saberi, Saeideh Fakhari, Amir Karam, Ali Ahmadabadi
Abstract:
Geomorphology is the scientific study of the form and shape of the Earth's surface. The existence and variation of landform types are mainly controlled by changes in the shape and position of land and topography. In fact, the interest in and application of fractals in geomorphology stems from the fact that many geomorphic landforms have fractal structures, and their formation and transformation can be explained by mathematical relations. The purpose of this study is to identify and analyze the fractal behavior of landforms of the macro-geomorphologic regions of Iran, as well as to study and analyze topographic and landform characteristics based on fractal relationships. In this study, using the Iranian digital elevation model, the fractal dimensions of topographic curves of landforms such as slopes and alluvial fans were calculated through the box counting method. The morphometric characteristics of the landforms and their fractal dimension were then calculated for four criteria (height, slope, profile curvature, and planimetric curvature) and indices (maximum, average, standard deviation) using ArcMap software separately. After investigating their correlation with fractal dimension, two-way regression analysis was performed and the relationship between fractal dimension and morphometric characteristics of landforms was investigated. The results show that, at pixel sizes of 30, 90, and 200 m, the fractal dimension of topographic curves of the different landform units of Iran (mountain, hill, plateau, and plain) varies from 1.06 in alluvial fans to 1.17 in the mountains. Generally, for all pixel sizes, the fractal dimension decreases from mountain to plain.
The fractal dimension has the highest correlation coefficient with the slope criterion and the standard deviation index, and the lowest correlation coefficient with the profile curvature and the mean index; as the pixels become larger, the correlation coefficient between the indices and the fractal dimension decreases.
Keywords: box counting method, fractal dimension, geomorphology, Iran, landform
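The box counting method itself is easy to sketch: count the occupied grid boxes N(s) at several box sizes s and take the (negated) slope of log N(s) against log s. The point set below is a synthetic straight transect, not Iranian DEM data; a smooth curve like this should yield a dimension close to 1:

```python
import math

def box_count_dimension(points, sizes):
    """Estimate the fractal (box-counting) dimension of a 2-D point set.

    Counts occupied boxes N(s) at each box size s and fits
    log N(s) = -D * log s + c by least squares; returns D."""
    logs, logn = [], []
    for s in sizes:
        # Map each point to its grid cell; a set keeps unique occupied boxes.
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append(math.log(s))
        logn.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(logs) / n, sum(logn) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(logs, logn))
             / sum((a - mx) ** 2 for a in logs))
    return -slope

# A straight transect (a smooth curve) should give D close to 1
line = [(i / 10000, i / 10000) for i in range(10000)]
d = box_count_dimension(line, sizes=[0.1, 0.05, 0.025, 0.0125])
```

For real contour lines extracted from a DEM, the same routine is applied to the rasterized curve coordinates; rougher (more mountainous) contours fill more boxes at small scales and so return a higher D.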
Procedia PDF Downloads 83
296 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming
Authors: Rui Li, Min Wen, Kim Bang Salling
Abstract:
For modern railways, maintenance is critical for ensuring safety, train punctuality, and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometer per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function minimizes the actual costs of the tamping machine. The approach uses a simple dynamic model of the condition-based tamping process and a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account in scheduling tamping: (1) track degradation, measured as the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating open track from station sections. The proposed maintenance model is applied to a 42.6 km Danish railway track between Odense and Fredericia over time periods of three and four years. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, the total costs can be reduced significantly (by 50%) compared with a previous model that optimizes the number of tampings. Different maintenance strategies are discussed in the paper.
The analysis of the results obtained from the model also shows that a longer predictive tamping planning period yields a more optimal scheduling of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance
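The flavour of the 0-1 model can be shown with a toy single-section version: binary tamp/no-tamp decisions per period, linear degradation of the longitudinal-level standard deviation, proportional recovery after tamping, and a quality threshold. All numbers are illustrative, and the real model covers many track sections, alignment, budgets, and open-track/station distinctions; a brute-force search stands in for the MIP solver:

```python
from itertools import product

def optimal_tamping(sigma0, growth, threshold, recovery, horizon, cost):
    """Brute-force solution of a toy 0-1 tamping model for one section.

    sigma0    : initial standard deviation of longitudinal level (mm)
    growth    : degradation per period (mm)
    threshold : quality limit tied to the line speed (mm)
    recovery  : fraction of sigma removed by one tamping
    horizon   : number of planning periods
    cost      : cost of one tamping intervention
    Returns (best_cost, schedule) where schedule[t] == 1 means tamp in t."""
    best = (float("inf"), None)
    for plan in product((0, 1), repeat=horizon):
        sigma, feasible = sigma0, True
        for t in range(horizon):
            if plan[t]:
                sigma *= (1 - recovery)   # condition-based recovery
            sigma += growth               # linear degradation model
            if sigma > threshold:
                feasible = False          # quality limit violated
                break
        if feasible:
            total = cost * sum(plan)
            if total < best[0]:
                best = (total, plan)
    return best

cost, schedule = optimal_tamping(sigma0=1.0, growth=0.4, threshold=2.0,
                                 recovery=0.6, horizon=6, cost=1.0)
```

With these numbers, no single tamping can keep the section within the threshold for all six periods, so the optimum uses two interventions; a real instance would be handed to a MIP solver instead of enumerated.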
Procedia PDF Downloads 442
295 Dynamic Modelling of Hepatitis B Patients Using the SIHAR Model
Authors: Alakija Temitope Olufunmilayo, Akinyemi, Yagba Joy
Abstract:
Hepatitis is inflammation of the liver tissue that can cause yellowing of the eyes (jaundice), lack of appetite, vomiting, tiredness, abdominal pain, and diarrhea. Hepatitis is acute if it resolves within 6 months and chronic if it lasts longer than 6 months. Acute hepatitis can resolve on its own, lead to chronic hepatitis, or, rarely, result in acute liver failure. Chronic hepatitis may lead to scarring of the liver (cirrhosis), liver failure, and liver cancer. Modelling Hepatitis B may become necessary in order to reduce its spread, and a dynamic SIR model can be used. This model consists of a system of three coupled non-linear ordinary differential equations that does not have a closed-form solution. It is an epidemiological model used to predict the dynamics of infectious disease by categorizing the population into three possible compartments. In this study, a five-compartment dynamic model of Hepatitis B disease was proposed and developed by adding the control measure of sensitizing the public, called awareness. All the mathematical and statistical formulation of the model, especially the general equilibrium of the model, was derived, including the nonlinear least squares estimators. The initial parameters of the model were derived using nonlinear least squares implemented in R code. The results show that the proportion of Hepatitis B patients in the study population is 1.4 per 1,000,000 population. The estimated Hepatitis B-induced death rate is 0.0108, meaning that 1.08% of infected individuals die of the disease. The reproduction number of Hepatitis B disease in Nigeria is 6.0, meaning that one individual can infect about 6 other people. The effect of sensitizing the public on the basic reproduction number is significant, as the reproduction number is reduced.
The study therefore recommends that programmes be designed by government and non-governmental organizations to sensitize the entire Nigerian population in order to reduce cases of Hepatitis B disease among the citizens.
Keywords: hepatitis B, modelling, non-linear ordinary differential equation, SIHAR model, sensitization
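The three-compartment core that the SIHAR model extends can be sketched with a forward-Euler integration of the SIR system dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I. The parameters below are hypothetical and merely chosen so that beta/gamma matches the reproduction number of 6.0 reported above:

```python
def sir_simulate(beta, gamma, s0, i0, r0_init, days, dt=0.1):
    """Forward-Euler integration of the basic SIR system.

    dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I.
    Returns the final (S, I, R) fractions; R0 = beta/gamma."""
    s, i, r = s0, i0, r0_init
    n = s + i + r
    steps = int(days / dt)
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # new infections this step
        new_rec = gamma * i * dt          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Hypothetical parameters with R0 = beta/gamma = 6.0:
beta, gamma = 0.6, 0.1
s, i, r = sir_simulate(beta, gamma, s0=0.999, i0=0.001, r0_init=0.0, days=200)
r0 = beta / gamma   # basic reproduction number
```

The SIHAR variant adds two further compartments (including the awareness control), but the same discretization pattern applies; awareness effectively lowers beta and hence R0.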
Procedia PDF Downloads 89
294 Angiogenesis and Blood Flow: The Role of Blood Flow in Proliferation and Migration of Endothelial Cells
Authors: Hossein Bazmara, Kaamran Raahemifar, Mostafa Sefidgar, Madjid Soltani
Abstract:
Angiogenesis is the formation of new blood vessels from existing vessels. Because blood flows through the vessels, during angiogenesis blood flow plays an important role in regulating the process. Multiple mathematical models of angiogenesis have been proposed to simulate the formation of the complicated network of capillaries around a tumor. In this work, a multi-scale model of angiogenesis is developed to show the effect of blood flow on capillaries and network formation. This model spans multiple temporal and spatial scales, i.e., the intracellular (molecular), cellular, and extracellular (tissue) scales. At the intracellular or molecular scale, the signaling cascade of endothelial cells is obtained. Two main stages in the development of a vessel are considered. In the first stage, single sprouts are extended toward the tumor; here the main regulator of endothelial cell behavior is the signals from the extracellular matrix. After anastomosis and the formation of closed loops, blood flow starts in the capillaries, and blood-flow-induced signals regulate endothelial cell behavior. At the cellular scale, growth and migration of endothelial cells are modeled with a discrete lattice Monte Carlo method called the cellular Potts model (CPM). At the extracellular (tissue) scale, diffusion of tumor angiogenic factors in the extracellular matrix, formation of closed loops (anastomosis), and shear stress induced by blood flow are considered. The model is able to simulate the formation of a closed loop and its extension. The results are validated against experimental data and show that, without blood flow, the capillaries are not able to maintain their integrity.
Keywords: angiogenesis, endothelial cells, multi-scale model, cellular Potts model, signaling cascade
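A full cellular Potts implementation is beyond a short sketch, but the gradient-biased migration it produces at the cellular scale can be caricatured by a one-dimensional biased random walk, with the bias standing in for the VEGF gradient from the tumor. This is not the authors' model; all numbers are illustrative:

```python
import random

def migrate_tip_cell(steps, bias, seed=0):
    """Minimal 1-D sketch of chemotactic tip-cell migration: at each
    step the cell moves +1 toward the tumour (VEGF source) with
    probability (0.5 + bias) and -1 otherwise. Not a cellular Potts
    model; it only illustrates gradient-biased migration."""
    rng = random.Random(seed)
    x = 0
    for _ in range(steps):
        x += 1 if rng.random() < 0.5 + bias else -1
    return x

biased = migrate_tip_cell(2000, bias=0.2)    # strong VEGF gradient
unbiased = migrate_tip_cell(2000, bias=0.0)  # no gradient
```

In the actual CPM, the same drift emerges from lattice-site copy attempts whose acceptance probability depends on a Hamiltonian with a chemotaxis term, rather than from an explicit step probability.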
Procedia PDF Downloads 425
293 Development of Ready Reckoner Charts for Easy, Convenient, and Widespread Use of Horrock’s Apparatus by Field Level Health Functionaries in India
Authors: Gumashta Raghvendra, Gumashta Jyotsna
Abstract:
Aim and Objective of Study: The use of Horrock’s apparatus by a health care worker requires on-site mathematical calculations to estimate the volume of water and the amount of bleaching powder required, based on the serial number of the first cup showing blue colouration after adding freshly prepared starch-iodide indicator solution. In view of the difficulty of performing two simultaneous calculations, Horrock’s apparatus is not routinely used by health care workers because it is impractical and inconvenient. Material and Methods: Arbitrary use of bleaching powder in wells results in hyper-chlorination or hypo-chlorination of the well, defying the purpose of adequate chlorination or leading to non-usage of well water due to hyper-chlorination. Keeping this in mind, two nomograms have been developed: one to assess the volume of the well using the depth and diameter of the well, and the other to determine the quantity of bleaching powder to be added using the number of the cup of Horrock’s apparatus which shows the colour indication. Result and Conclusion: Of the two self-explanatory interlinked charts thus developed, the first chart bypasses the formula ‘πr²h’ for water volume (a ready reckoner table with the depth of water shown on the X axis and the diameter of the well on the Y axis), and the second chart bypasses the formula ‘2ab/455’ (where ‘a’ is the serial number of the cup and ‘b’ is the water volume; a ready reckoner table with the water volume on the X axis and the serial number of the cup on the Y axis). These two charts will let a health care worker know immediately, by referring to them, the exact requirement of bleaching powder. The ready reckoner charts thus developed will be easy and convenient to use for preventing water-borne diseases caused by hypo-chlorination, especially in rural India and other developing countries.
Keywords: apparatus, bleaching, chlorination, Horrock’s, nomogram
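The two formulas the charts replace can be checked directly in code. This sketch assumes the conventional reading of 2ab/455 as grams of bleaching powder with the water volume b in litres, since the abstract does not state units:

```python
import math

def well_volume_litres(depth_m, diameter_m):
    """Volume of water in a circular well: pi * r^2 * h, converted
    from cubic metres to litres."""
    r = diameter_m / 2.0
    return math.pi * r * r * depth_m * 1000.0

def bleaching_powder_grams(cup_number, volume_litres):
    """Bleaching powder required per the Horrock's test formula
    2ab/455, where a is the serial number of the first cup showing
    blue colouration and b is the water volume in litres
    (grams assumed as the conventional unit)."""
    return 2.0 * cup_number * volume_litres / 455.0

# Example: a well 3 m deep and 2 m across, third cup turns blue
vol = well_volume_litres(depth_m=3.0, diameter_m=2.0)
dose = bleaching_powder_grams(cup_number=3, volume_litres=vol)
```

Tabulating `well_volume_litres` over a grid of depths and diameters reproduces the first chart, and `bleaching_powder_grams` over volumes and cup numbers reproduces the second.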
Procedia PDF Downloads 482
292 Optimization-Based Design Improvement of Synchronizer in Transmission System for Efficient Vehicle Performance
Authors: Sanyka Banerjee, Saikat Nandi, P. K. Dan
Abstract:
The synchronizer, an integral part of the gearbox, is a key element of the automotive transmission system. The performance of the synchronizer affects transmission efficiency and driving comfort. As a major component of the transmission system, the synchronizing mechanism must be capable of preventing vibration and noise in the gears. Improving gear shifting efficiency, with the aim of achieving smooth, quick, and energy-efficient power transmission, remains a challenge for the automotive industry. The performance of the synchronizer depends on the features and characteristics of its sub-components, and an analysis of the contribution of these characteristics is therefore necessary. An important exercise is to identify all characteristics or factors associated with the modeling and analysis; for this purpose, the literature was reviewed extensively to study the mathematical models formulated from such factors. It has been observed that certain factors are common across models; however, a few factors have been selected specifically for individual models, as reported. In order to obtain a more realistic model, an attempt has been made here to identify and assimilate practically all factors which may be considered in formulating the model more comprehensively. A simulation study, formulated as a block model, has been carried out in a reliable environment such as MATLAB. A lower synchronization time is desirable, and hence it has been considered here as the output factor in the simulation modeling for evaluating transmission efficiency. An improved synchronizer model requires optimized values of the sub-component design parameters. A parametric optimization utilizing Taguchi’s design-of-experiment-based response data and their analysis has been carried out for this purpose.
The effectiveness of the optimized parameters for the improved synchronizer performance has been validated by a simulation study of the synchronizer block model, using the improved parameter values as inputs, confirming better transmission efficiency and driver comfort.
Keywords: design of experiments, modeling, parametric optimization, simulation, synchronizer
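The Taguchi analysis described above can be sketched numerically. The snippet below computes the smaller-the-better signal-to-noise ratio, the usual Taguchi criterion when a low response such as synchronization time is desired; the factor names, levels and timing values are illustrative assumptions, not the study's data.

```python
import math

def sn_smaller_the_better(times):
    """Taguchi signal-to-noise ratio for a 'smaller the better' response
    such as synchronization time: eta = -10*log10(mean(y^2))."""
    return -10.0 * math.log10(sum(t * t for t in times) / len(times))

# Hypothetical synchronization times (s) for two levels of one design
# factor (e.g. cone angle), three replicates each -- illustrative only.
trials = {
    ("cone_angle", "level_1"): [0.42, 0.45, 0.43],
    ("cone_angle", "level_2"): [0.35, 0.36, 0.34],
}

sn = {k: sn_smaller_the_better(v) for k, v in trials.items()}

# The level with the higher S/N ratio is preferred (lower and more
# consistent synchronization time).
best = max(sn, key=sn.get)
```

In a full Taguchi study the same ratio would be computed for every factor-level combination of the orthogonal array, and the level effects averaged per factor.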
Procedia PDF Downloads 311
291 Evaluation of River Meander Geometry Using Uniform Excess Energy Theory and Effects of Climate Change on River Meandering
Authors: Youssef I. Hafez
Abstract:
Since ancient times, rivers have been a favoured place for people and civilizations to settle and live along their banks. However, due to floods and droughts, and especially the severe conditions brought by global warming and climate change, river channels continually evolve and migrate laterally, changing their plan form either by straightening curved reaches (meander cut-off) or by increasing meander curvature. The lateral shift or shrinkage of a river channel severely affects the river banks and the flood plain, with a tremendous impact on the surrounding environment. Understanding the formation and ongoing processes of river channel meandering is therefore of paramount importance. So far, despite the huge number of publications on river meandering, no theory or approach has provided a satisfactory explanation of the formation of river meanders and the mechanics of their associated geometries. In particular, two parameters are often needed to describe meander geometry. The first is a scale parameter, such as the meander arc length. The second is a shape parameter, such as the maximum angle the meander path makes with the mean down-channel direction. If known, these two parameters determine the meander path and geometry, for example when they are incorporated in the well-known sine-generated curve. In this study, a uniform excess energy theory is used to illustrate the origin and mechanics of formation of river meandering. This theory advocates that the longitudinal imbalance between the valley and channel slopes (the former being greater than the latter) leads to the formation of a curved meander channel, which reduces the excess energy by expending it as transverse energy loss.
Two relations are developed on the basis of this theory: one for determining the river channel radius of curvature at the bend apex (the shape parameter) and the other for determining the river channel sinuosity. The sinuosity equation performed very well when applied to the available field data. In addition, existing model data were used to develop a relation between the meander arc length and the Darcy-Weisbach friction factor. The meander wave length was then determined from the equations for the arc length and the sinuosity, and the developed equation compared well with available field data. Effects of the transverse bed slope and grain size on river channel sinuosity are addressed. In addition, the concept of maximum channel sinuosity is introduced to explain changes in river channel plan form due to changes in flow discharges and sediment loads induced by global warming and climate change.
Keywords: river channel meandering, sinuosity, radius of curvature, meander arc length, uniform excess energy theory, transverse energy loss, transverse bed slope, flow discharges, sediment loads, grain size, climate change, global warming
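The sine-generated curve mentioned above can be traced directly from the two meander parameters. The sketch below builds the path from the maximum deflection angle and the arc length and computes the resulting sinuosity, checking it against the known closed-form result 1/J0(omega); the 70-degree angle is an illustrative value.

```python
import math

def meander_path(omega, arc_length=1.0, n=20000):
    """Trace a sine-generated curve: the local flow direction phi at
    arc distance s is phi(s) = omega * sin(2*pi*s/arc_length)."""
    ds = arc_length / n
    x = y = 0.0
    xs, ys = [0.0], [0.0]
    for i in range(n):
        s = (i + 0.5) * ds
        phi = omega * math.sin(2.0 * math.pi * s / arc_length)
        x += math.cos(phi) * ds
        y += math.sin(phi) * ds
        xs.append(x)
        ys.append(y)
    return xs, ys

def sinuosity(omega):
    """Channel length / down-valley length over one meander wavelength."""
    xs, _ = meander_path(omega)
    return 1.0 / xs[-1]            # arc length is 1, valley length is xs[-1]

def bessel_j0(x, terms=30):
    """J0 via its power series; for the sine-generated curve the
    sinuosity equals 1/J0(omega) exactly."""
    total, term = 0.0, 1.0
    for k in range(terms):
        if k > 0:
            term *= -(x / 2.0) ** 2 / (k * k)
        total += term
    return total

omega = math.radians(70.0)   # maximum deflection angle of 70 degrees
```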
Procedia PDF Downloads 223
290 Numerical Solution of Portfolio Selecting Semi-Infinite Problem
Authors: Alina Fedossova, Jose Jorge Sierra Molina
Abstract:
SIP problems belong to non-classical optimization: the number of variables is finite while the number of constraints is infinite. These are semi-infinite programming problems. Most algorithms for semi-infinite programming reduce the semi-infinite problem to a finite one and solve it by classical methods of linear or nonlinear programming. Typically, some of the constraints or the objective function are nonlinear, so the problem often involves nonlinear programming. An investment portfolio is a set of instruments used to reach the specific purposes of investors. The risk of the entire portfolio may be less than the risks of the individual investments composing it. For example, suppose we invest M euros in N shares for a specified period. Let y_i > 0 be the return at the end of the period per euro invested in stock i (i = 1, ..., N). The goal is then to determine the amount x_i to be invested in stock i, i = 1, ..., N, so as to maximize the end-of-period value yᵀx, where x = (x_1, ..., x_N) and y = (y_1, ..., y_N). For us, the optimal portfolio means the portfolio with the best risk-return trade-off, that is, the portfolio that meets the investor's goals and risk appetite. Investment goals and risk appetite are therefore the factors that influence the choice of an appropriate portfolio of assets. Since the investment returns are uncertain, we obtain a semi-infinite programming problem. We solve this semi-infinite optimization problem of portfolio selection using outer approximation methods. The approach can be considered a development of the Eaves-Zangwill method, applying a multi-start technique in every iteration to search for the relevant constraint parameters. The stochastic outer approximations method, successfully applied previously to robotics problems, Chebyshev approximation problems, air pollution and others, is based on the optimality criteria of quasi-optimal functions.
As a result, we obtain a mathematical model and the optimal investment portfolio when the yields are not known in advance. Finally, we apply this algorithm to a specific case of a Colombian bank.
Keywords: outer approximation methods, portfolio problem, semi-infinite programming, numerical solution
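The outer approximation idea, solving a growing finite relaxation and adding the most violated constraint at each iteration, can be illustrated on a toy semi-infinite program (not the portfolio model of the abstract). The two-variable LP relaxations are solved by simple vertex enumeration; the exact optimum of the toy problem is sqrt(2).

```python
import math

# Toy semi-infinite program: maximize x1 + x2 subject to
#   x1*cos(t) + x2*sin(t) <= 1 for every t in [0, pi/2], x1, x2 >= 0.
# The exact optimum is x1 = x2 = sqrt(2)/2 with objective sqrt(2).

def solve_lp(angles):
    """Solve the finite relaxation max x1 + x2 subject to the listed
    cuts, by enumerating vertices (intersections of constraint pairs)."""
    cons = [(math.cos(t), math.sin(t), 1.0) for t in angles]
    cons += [(-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]   # x1 >= 0, x2 >= 0
    best_val, best_x = None, None
    for i in range(len(cons)):
        for j in range(i + 1, len(cons)):
            a1, b1, c1 = cons[i]
            a2, b2, c2 = cons[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-12:
                continue
            x = (c1 * b2 - c2 * b1) / det
            y = (a1 * c2 - a2 * c1) / det
            if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
                if best_val is None or x + y > best_val:
                    best_val, best_x = x + y, (x, y)
    return best_val, best_x

def most_violated(x):
    """Separation step: scan a fine grid of t for the largest violation."""
    worst_t, worst_v = None, 0.0
    for k in range(1001):
        t = math.pi / 2.0 * k / 1000.0
        v = x[0] * math.cos(t) + x[1] * math.sin(t) - 1.0
        if v > worst_v:
            worst_t, worst_v = t, v
    return worst_t, worst_v

# Outer approximation loop: start from two cuts and add the most
# violated one until the relaxation is (nearly) feasible.
angles = [0.0, math.pi / 2.0]
for _ in range(30):
    obj, x = solve_lp(angles)
    t, viol = most_violated(x)
    if viol < 1e-6:
        break
    angles.append(t)
```

The objective values of the relaxations decrease monotonically toward the semi-infinite optimum, which is the defining behaviour of outer approximation schemes.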
Procedia PDF Downloads 309
289 A Perspective on Teaching Mathematical Concepts to Freshman Economics Students Using 3D-Visualisations
Authors: Muhammad Saqib Manzoor, Camille Dickson-Deane, Prashan Karunaratne
Abstract:
The Cobb-Douglas production (utility) function is a fundamental function widely used in economics teaching and research. The key reason is the function's ability to describe actual production using inputs such as labour and capital. Characteristics of the function such as returns to scale and marginal and diminishing marginal productivities are covered in the introductory units of both microeconomics and macroeconomics, with a 2-dimensional static visualisation of the function. However, less insight is provided regarding the three-dimensional surface, the changes in curvature properties due to returns to scale, the linkage of the short-run production function with its long-run counterpart and marginal productivities, the level curves, and constrained optimisation. Since (freshman) learners have diverse prior knowledge and cognitive skills, the existing 'one size fits all' approach is not very helpful. The aim of this study is to bridge this gap by introducing a technological intervention with interactive animations of the three-dimensional surface and a sequential unveiling of the characteristics mentioned above, using Python software. A small classroom intervention has helped students enhance their analytical and visualisation skills towards active and authentic learning of this topic. However, to authenticate the strength of our approach, a quasi-Delphi study will be conducted to ask domain-specific experts: 'What value to the learning process in economics is there in using a 2-dimensional static visualisation compared to a 3-dimensional dynamic visualisation?' Three perspectives of the intervention were reviewed by a panel comprising novice students, experienced students, novice instructors, and experienced instructors, in an effort to determine the learnings from each type of visualisation within a specific domain of knowledge.
The value of this approach lies in suggesting different pedagogical methods that can enhance learning outcomes.
Keywords: Cobb-Douglas production function, quasi-Delphi method, effective teaching and learning, 3D-visualisations
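The surface data behind such a 3-D visualisation is easy to generate. The sketch below evaluates a Cobb-Douglas function Q = A * L^alpha * K^beta and verifies two of the characteristics discussed, returns to scale and diminishing marginal productivity; the parameter values are illustrative, and in a classroom setting the same grid of values would be passed to a 3-D plotting routine.

```python
# Cobb-Douglas production function Q = A * L**alpha * K**beta.
A, alpha, beta = 1.0, 0.5, 0.5      # constant returns: alpha + beta = 1

def output(L, K):
    return A * (L ** alpha) * (K ** beta)

def marginal_product_labour(L, K, h=1e-6):
    """Numerical MPL = dQ/dL, the slope students see on the 3-D surface."""
    return (output(L + h, K) - output(L - h, K)) / (2.0 * h)

# Returns to scale: scaling both inputs by s scales output by s**(alpha+beta).
q1 = output(4.0, 9.0)
q2 = output(8.0, 18.0)              # both inputs doubled

# Diminishing marginal productivity: MPL falls as L grows with K fixed.
mpl_small = marginal_product_labour(1.0, 9.0)
mpl_large = marginal_product_labour(16.0, 9.0)
```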
Procedia PDF Downloads 145
288 Variation of Manning's Coefficient in a Meandering Channel with Emergent Vegetation Cover
Authors: Spandan Sahu, Amiya Kumar Pati, Kishanjit Kumar Khatua
Abstract:
Vegetation plays a major role in determining the flow parameters in an open channel, and it enhances the aesthetic view of the revetments. The major types of vegetation in rivers typically comprise herbs, grasses, weeds, trees, etc. The vegetation in an open channel usually consists of aquatic plants with complete or partial submergence, or floating plants. The presence of vegetation brings both benefits and problems. The major benefit of aquatic plants is that they reduce soil erosion, providing the water with a free surface to move on without hindrance. The obvious problems are that they retard the flow of water and reduce the hydraulic capacity of the channel. The degree to which the flow parameters are affected depends upon the density of the vegetation, the degree of submergence, the vegetation pattern, and the vegetation species. Vegetation in an open channel tends to provide resistance to flow, which in turn provides a background for studying the varying trends in flow parameters when there is vegetative growth on the channel surface. In this paper, an experiment has been conducted on a meandering channel with a sinuosity of 1.33 and a rigid vegetation cover to investigate the effect on the flow parameters and the variation of Manning's n with the denseness of vegetation, the vegetation pattern, and the submergence criteria. The measurements have been carried out at four different cross-sections: two on the trough portions of the meanders and two on the crest portions. In this study, the analytical solution of Shiono and Knight (SKM) for the lateral distributions of depth-averaged velocity and bed shear stress has been taken into account. Dimensionless eddy viscosity and bed friction have been incorporated to modify the SKM and provide more accurate results. A mathematical model has been formulated for a comparative analysis with the results obtained from the Shiono-Knight method.
Keywords: bed friction, depth averaged velocity, eddy viscosity, SKM
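For a single flat-bed panel with the secondary-flow term neglected, the SKM momentum balance admits the closed-form depth-averaged velocity profile used below. This is a simplified sketch of the Shiono-Knight solution, not the modified model of the paper, and the depth, slope, friction factor and eddy-viscosity values are illustrative assumptions.

```python
import math

g = 9.81
H = 1.0        # flow depth (m)                -- illustrative values
S0 = 1e-3      # bed slope
f = 0.02       # Darcy-Weisbach friction factor
lam = 0.07     # dimensionless eddy viscosity
b = 10.0       # channel width (m), walls at y = +/- b/2

# Uniform-flow limit U^2 = 8*g*H*S0/f and lateral decay rate gamma,
# from the SKM balance g*H*S0 - (f/8)*U^2 + d/dy(...) = 0.
k = 8.0 * g * H * S0 / f
gamma = math.sqrt(2.0 / lam) * (f / 8.0) ** 0.25 / H

def depth_avg_velocity(y):
    """U_d(y) for a symmetric flat-bed panel with U_d = 0 at the walls:
    U_d^2 = k * (1 - cosh(gamma*y) / cosh(gamma*b/2))."""
    w = k * (1.0 - math.cosh(gamma * y) / math.cosh(gamma * b / 2.0))
    return math.sqrt(max(w, 0.0))

def bed_shear_stress(y, rho=1000.0):
    """tau_b = rho * (f/8) * U_d^2, the second SKM output."""
    return rho * f / 8.0 * depth_avg_velocity(y) ** 2
```

Vegetation and bed friction enter through f and lam: raising either flattens and slows the profile, which is the qualitative effect the experiments quantify through Manning's n.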
Procedia PDF Downloads 137
287 Interfacial Instability and Mixing Behavior between Two Liquid Layers Bounded in Finite Volumes
Authors: Lei Li, Ming M. Chai, Xiao X. Lu, Jia W. Wang
Abstract:
The mixing process of two liquid layers in a cylindrical container involves the upper liquid, with higher density, rushing into the lower liquid, with lower density, while the lower liquid rises into the upper one; the two liquid layers interact with each other, forming vortices, spreading or dispersing into one another, and entraining or mixing with one another. It is a complex process composed of flow instability, turbulent mixing and other multiscale physical phenomena, and it evolves quickly. In order to explore the mechanism of the process and make further investigations, experiments on the interfacial instability and mixing behaviour between two liquid layers bounded in different volumes were carried out, applying the planar laser induced fluorescence (PLIF) and high speed camera (HSC) techniques. According to the results, the interfacial instability between immiscible liquids develops faster than the theoretical rate given by Rayleigh-Taylor instability (RTI) theory. It is reasonable to conjecture that mechanisms other than the RTI play key roles in the mixing process of the two liquid layers. The results show that the invading velocity of the upper liquid into the lower liquid does not depend on the upper liquid's volume (height). Compared to the cases where the upper and lower containers have identical diameters, when the lower liquid volume increases to a larger geometric space the upper liquid spreads and expands into the lower liquid more quickly during the evolution of the interfacial instability, indicating that the container wall has an important influence on the mixing process.
In the experiments on the mixing of miscible liquid layers, the diffusion time and pattern of the interfacial mixing likewise do not depend on the upper liquid's volume; when the lower liquid volume increases to a larger geometric space, the action of the bounding wall on the falling and rising flow decreases, and the interfacial mixing effects also attenuate. It is therefore concluded that the weight of the heavier upper liquid is not the reason for the fast interfacial instability evolution between the two liquid layers, and that the bounding wall's action is limited to the unstable and mixing flow. Numerical simulations of the immiscible liquid layers' interfacial instability flow using the VOF method show a typical flow pattern that agrees with the experiments; however, the calculated instability development is much slower than the experimental measurement. The numerical simulation of the miscible liquids' mixing, which applies Fick's diffusion law in the components' transport equation, shows a much faster mixing rate at the liquids' interface than the experiments in the initial stage. It can be presumed that the interfacial tension plays an important role in the interfacial instability between two liquid layers bounded in a finite volume.
Keywords: interfacial instability and mixing, two liquid layers, planar laser induced fluorescence (PLIF), high speed camera (HSC), interfacial energy and tension, Cahn-Hilliard Navier-Stokes (CHNS) equations
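The theoretical rate referred to above is the classical linear RTI growth rate sigma = sqrt(A*g*k), with A the Atwood number and k the perturbation wavenumber. The sketch below evaluates it for an assumed salt-water-over-fresh-water pairing; the densities and wavelength are illustrative, not the experimental fluids.

```python
import math

g = 9.81

def atwood_number(rho_heavy, rho_light):
    """A = (rho_h - rho_l) / (rho_h + rho_l)."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rti_growth_rate(rho_heavy, rho_light, wavelength):
    """Classical inviscid linear growth rate sigma = sqrt(A*g*k) for a
    heavy fluid over a light one; a perturbation grows as exp(sigma*t)."""
    k = 2.0 * math.pi / wavelength
    A = atwood_number(rho_heavy, rho_light)
    return math.sqrt(A * g * k)

# Illustrative pairing: salt water (1100 kg/m3) over fresh water (1000).
A = atwood_number(1100.0, 1000.0)
sigma = rti_growth_rate(1100.0, 1000.0, wavelength=0.05)

def e_fold_time(sigma):
    """Time for a small perturbation to grow by a factor of e."""
    return 1.0 / sigma
```

Comparing the measured interface amplitude history against exp(sigma*t) is the kind of check that revealed the faster-than-RTI evolution reported above.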
Procedia PDF Downloads 248
286 Quality Control of Distinct Cements by IR Spectroscopy: First Insights into Perspectives and Opportunities
Authors: Tobias Bader, Joerg Rickert
Abstract:
One key factor in achieving net-zero emissions along the cement and concrete value chain in Europe by 2050 is the use of distinct constituents to produce improved and advanced cements. These cements will contain, e.g., calcined clays and recycled concrete fines, which are chemically similar as well as X-ray amorphous and therefore difficult to distinguish. This leads to enhanced requirements on the analytical methods for quality control, regarding accuracy as well as reproducibility, due to the more complex cement composition. With the methods currently provided for in the European standards, it will be a challenge to ensure reliable analyses of the composition of these cements. In an ongoing research project, infrared (IR) spectroscopy in combination with mathematical tools (chemometrics) is being evaluated as an additional analytical method, with fast and low preparation effort, for the characterization of silicate-based cement constituents. The resulting comprehensive database should facilitate determination of the composition of new cements. First results confirmed the applicability of near-infrared (NIR) spectroscopy for the characterization of traditional silicate-based cement constituents (e.g. clinker, granulated blast furnace slag) and modern X-ray amorphous constituents (e.g. calcined clay, recycled concrete fines), as well as different sulfate species (e.g. gypsum, hemihydrate, anhydrite). A multivariate calibration model based on numerous calibration mixtures is in preparation. The final analytical concept to be developed will form the basis for establishing IR spectroscopy as a rapid analytical method for characterizing material flows of known and unknown inorganic substances according to their material properties, online and offline.
The underlying project was funded by the Federal Institute for Research on Building, Urban Affairs and Spatial Development on behalf of the Federal Ministry of Housing, Urban Development and Building with funds from the 'Zukunft Bau' research programme.
Keywords: cement, infrared spectroscopy, quality control, X-ray amorphous
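One elementary chemometric step behind such a calibration is classical least squares, in which a mixture spectrum is modelled as a linear combination of pure-constituent spectra. The sketch below unmixes a synthetic two-constituent NIR spectrum by solving the normal equations; all absorbance values are invented for illustration.

```python
# Classical least squares (CLS): a mixture spectrum is modelled as a
# linear combination of pure-constituent spectra, and solving the 2x2
# normal equations recovers the mixing fractions. Spectra are synthetic.

clinker = [0.10, 0.40, 0.80, 0.30, 0.05]   # "pure" NIR absorbances
slag    = [0.50, 0.20, 0.10, 0.60, 0.40]   # at five wavelengths

true_fracs = (0.7, 0.3)
mixture = [true_fracs[0] * a + true_fracs[1] * b
           for a, b in zip(clinker, slag)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def unmix(spectrum, comp1, comp2):
    """Solve min ||spectrum - c1*comp1 - c2*comp2|| via normal equations."""
    a11, a12 = dot(comp1, comp1), dot(comp1, comp2)
    a22 = dot(comp2, comp2)
    b1, b2 = dot(comp1, spectrum), dot(comp2, spectrum)
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det
    c2 = (a11 * b2 - a12 * b1) / det
    return c1, c2

c1, c2 = unmix(mixture, clinker, slag)
```

A real multivariate calibration would use many calibration mixtures and a latent-variable method (e.g. PLS) rather than pure-component spectra, but the linear-mixing assumption is the same.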
Procedia PDF Downloads 39
285 Influence of Flexible Plate's Contour on Dynamic Behavior of High Speed Flexible Coupling of Combat Aircraft
Authors: Dineshsingh Thakur, S. Nagesh, J. Basha
Abstract:
A lightweight High Speed Flexible Coupling (HSFC) is used to connect the Engine Gear Box (EGB) with the Accessory Gear Box (AGB) of a combat aircraft. The HSFC transmits power from the EGB to the AGB at high speeds, ranging from 10000 to 18000 rpm. The HSFC also accommodates the larger misalignments resulting from thermal expansion of the aircraft engine and the mounting arrangement. The HSFC has a series of metallic, contoured, annular, thin cross-sectioned flexible plates to accommodate the misalignments; the flexible plates accommodate misalignment through elastic flexure of the material. As the HSFC operates at high speed, the flexural and axial resonance frequencies must be kept away from the operating speed, and proper prediction is required to prevent failure in the transmission line of a single-engine fighter aircraft. To study the influence of the flexible plate's contour on the lateral critical speed (LCS) of the HSFC, a mathematical model of the HSFC as an eleven-rotor system is developed. The flexible plate being the bending member of the system, its bending stiffness, which results from the contour, governs the LCS. Using the transfer matrix method, the influence of various flexible plate contours on the critical speed is analyzed; the support bearing flexibility is also considered in the critical speed prediction. Based on this study, a model with the optimum flexible plate contour is built for validation by experimental modal analysis. A good correlation between the theoretical prediction and the model behaviour is observed. From the study, it is found that the flexible plate's contour plays a vital role in modifying the system's dynamic behaviour, and the present model can be extended to the development of similar flexible couplings owing to its computational simplicity and reliability.
Keywords: flexible rotor, critical speed, experimental modal analysis, high speed flexible coupling (HSFC), misalignment
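The transfer matrix method named above can be sketched for the simplest case: a simply supported shaft carrying one lumped disc, for which the lateral critical speed has the textbook value sqrt(48*EI/(m*L^3)). The state vector is (deflection, slope, moment, shear), and the critical speed is located by bisection on the boundary-condition determinant; the rotor data below are illustrative, not the eleven-rotor HSFC model.

```python
import math

def field(l, EI):
    """Transfer matrix of a massless elastic shaft segment of length l,
    acting on the state vector (y, theta, M, V)."""
    return [[1.0, l, l * l / (2 * EI), l ** 3 / (6 * EI)],
            [0.0, 1.0, l / EI, l * l / (2 * EI)],
            [0.0, 0.0, 1.0, l],
            [0.0, 0.0, 0.0, 1.0]]

def point_mass(m, omega):
    """Lumped disc: shear jump V+ = V- + m*omega^2*y at frequency omega."""
    P = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    P[3][0] = m * omega * omega
    return P

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def residual(omega, EI, L, m):
    """Boundary determinant for a simply supported rotor (y = M = 0 at
    both ends); a zero crossing marks a lateral critical speed."""
    T = matmul(field(L / 2, EI), matmul(point_mass(m, omega), field(L / 2, EI)))
    # unknowns at the left end: theta, V; enforce y = 0 and M = 0 at the right
    return T[0][1] * T[2][3] - T[0][3] * T[2][1]

EI, L, m = 100.0, 2.0, 5.0         # illustrative rotor data
lo, hi = 1.0, 200.0                # bracket containing the sign change
for _ in range(80):                # bisection
    mid = 0.5 * (lo + hi)
    if residual(lo, EI, L, m) * residual(mid, EI, L, m) <= 0.0:
        hi = mid
    else:
        lo = mid
omega_crit = 0.5 * (lo + hi)
```

The paper's model simply chains more point and field matrices (eleven stations, contour-dependent plate stiffnesses, bearing flexibilities) before applying the same boundary-determinant search.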
Procedia PDF Downloads 215
284 Reverse Logistics End of Life Products Acquisition and Sorting
Authors: Badli Shah Mohd Yusoff, Khairur Rijal Jamaludin, Rozetta Dollah
Abstract:
The emergence of reverse logistics and product recovery management is an important concept in reconciling economic and environmental objectives by recapturing the value of end-of-life product returns. End-of-life products contain valuable modules, parts, residues and materials that can create value if recovered efficiently. The main objective of this study is to explore and develop a model that recovers as much of this economic value as reasonably possible, finding the optimal return acquisition and sorting policy to meet demand and maximize profits over time. A further benefit for the remanufacturer is the ability to forecast the future demand for used products under uncertainty in the quantity and quality of returns. Formulated on the basis of a generic disassembly tree, the proposed model focuses on three reverse logistics activities, namely refurbishing, remanufacturing and disposal, incorporating all plausible quality levels of the returns. A stricter sorting policy decreases the quantity of products to be refurbished or remanufactured and increases the share of discarded products. Numerical experiments were carried out to investigate the characteristics and behaviour of the proposed model, with a mathematical programming model implemented in LINGO 16.0 for the medium-term planning of return acquisition, disassembly (refurbishing or remanufacturing) and disposal activities. Moreover, the model supports the analysis of a number of trade-off decisions for maximizing the revenue of reverse logistics services for collected used products through the refurbishing and remanufacturing recovery options. The results showed that full utilization of the sorting process leads the system to obtain a smaller quantity from acquisition with minimal overall cost.
Further, a sensitivity analysis provides a range of possible scenarios to consider in optimizing the overall cost of refurbished and remanufactured products.
Keywords: core acquisition, end of life, reverse logistics, quality uncertainty
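The sorting trade-off described above can be illustrated with a toy threshold model: cores above a quality threshold are remanufactured, the rest are discarded, and a stricter threshold discards more cores. All economic figures and the quality distribution are invented for illustration and are unrelated to the paper's LINGO model.

```python
import random

random.seed(7)

# Each returned core has a quality score in [0, 1]. A sorting threshold
# sends cores with quality >= threshold to remanufacturing, the rest to
# disposal. Illustrative economics, not the paper's data:
REMAN_REVENUE = 50.0
DISPOSAL_COST = 4.0

def reman_cost(quality):
    """Lower-quality cores cost more to bring back to spec."""
    return 10.0 + 60.0 * (1.0 - quality)

returns = [random.random() for _ in range(5000)]

def profit(threshold):
    total = 0.0
    for q in returns:
        if q >= threshold:
            total += REMAN_REVENUE - reman_cost(q)
        else:
            total -= DISPOSAL_COST
    return total

# Sweep thresholds: too lax loses money on bad cores, too strict
# discards profitable ones; the optimum sits in between (analytically,
# where 60*q - 20 = -DISPOSAL_COST, i.e. q ~ 0.27 here).
candidates = [i / 100.0 for i in range(101)]
best_threshold = max(candidates, key=profit)
```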
Procedia PDF Downloads 302
283 Study of University Course Scheduling for Crowd Gathering Risk Prevention and Control in the Context of Routine Epidemic Prevention
Authors: Yuzhen Hu, Sirui Wang
Abstract:
As training bases for intellectual talent, universities host large numbers of students. Teaching is the primary activity in universities, and during the teaching process large numbers of people gather both inside and outside the teaching buildings, posing a strong risk of close contact. The class schedule is the fundamental basis for teaching activities in universities and plays a crucial role in the management of teaching order. Different class schedules lead to varying degrees of indoor gathering and different trajectories of class attendees. In recent years, highly contagious diseases have frequently occurred worldwide, and how to reduce the risk of infection has always been a hot issue related to public safety. 'Reducing gatherings' is one of the core measures in epidemic prevention and control, and in specific environments it can be achieved through scientific scheduling. The goal of scientific prevention and control can therefore be met by reducing the risk of excessive gatherings when arranging the course schedule. Firstly, we address the issue of personnel gathering along the various pathways on campus and establish a nonlinear mathematical model with the goals of minimizing congestion and maximizing teaching effectiveness. Next, we design an improved genetic algorithm, incorporating real-time evacuation operations based on tracking search and multidimensional positive-gradient cross-mutation operations, considering the characteristics of outdoor crowd evacuation. Finally, we apply undergraduate course data from a university in Harbin to conduct a case study, which compares and analyzes the effects of the algorithm improvement on the optimization of gathering situations and explores the impact of path blocking on the degree of gathering of individuals on other pathways.
Keywords: the university timetabling problem, risk prevention, genetic algorithm, risk control
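A genetic algorithm of the general kind described, applied to a miniature timetabling instance, can be sketched as follows. Chromosomes assign course sections to timeslots and the fitness penalises the peak number of students gathered in any slot; the class sizes, operators and parameters are illustrative and far simpler than the paper's improved GA.

```python
import random

random.seed(1)

# Ten course sections with class sizes; four timeslots. The fitness
# penalises the peak number of students gathered in any one slot.
SIZES = [60, 45, 80, 30, 55, 70, 40, 65, 50, 35]
SLOTS = 4

def peak_gathering(assign):
    load = [0] * SLOTS
    for course, slot in enumerate(assign):
        load[slot] += SIZES[course]
    return max(load)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(SLOTS) if random.random() < rate else s
            for s in assign]

def evolve(generations=200, pop_size=40):
    pop = [[random.randrange(SLOTS) for _ in SIZES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=peak_gathering)            # lower peak is fitter
        elite = pop[: pop_size // 4]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return min(pop, key=peak_gathering)

best = evolve()
```

For this instance the ideal peak is 135 students (the sizes are multiples of 5 summing to 530 over four slots), so the GA's result can be judged against a known bound.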
Procedia PDF Downloads 88
282 Thorium Extraction with Cyanex272 Coated Magnetic Nanoparticles
Authors: Afshin Shahbazi, Hadi Shadi Naghadeh, Ahmad Khodadadi Darban
Abstract:
In the Magnetically Assisted Chemical Separation (MACS) process, tiny ferromagnetic particles coated with a solvent extractant are used to selectively separate radionuclides and hazardous metals from aqueous waste streams. The contaminant-loaded particles are then recovered from the waste solutions using a magnetic field. In the present study, Cyanex272 or C272 (bis(2,4,4-trimethylpentyl) phosphinic acid) coated magnetic particles are evaluated for possible application in the extraction of thorium (IV) from nuclear waste streams. The uptake behaviour of Th(IV) from nitric acid solutions was investigated in a batch system, with adsorption isotherm and adsorption kinetic studies of Th(IV) onto the Cyanex272-coated nanoparticles. The factors influencing Th(IV) adsorption were investigated and described in detail as a function of parameters such as the initial pH value, contact time, adsorbent mass, and initial Th(IV) concentration. The MACS-process adsorbent showed the best results for fast adsorption of Th(IV) from aqueous solution at an aqueous-phase acidity of 0.5 molar. In addition, more than 80% of the Th(IV) was removed within the first 2 hours, and the time required to achieve adsorption equilibrium was only 140 minutes. The Langmuir and Freundlich adsorption models were used for the mathematical description of the adsorption equilibrium. The equilibrium data agreed very well with the Langmuir model, with a maximum adsorption capacity of 48 mg/g. The adsorption kinetics data were tested using pseudo-first-order, pseudo-second-order and intra-particle diffusion models.
Kinetic studies showed that the adsorption followed a pseudo-second-order kinetic model, indicating that the chemical adsorption was the rate-limiting step.
Keywords: thorium (IV) adsorption, MACS process, magnetic nanoparticles, Cyanex272
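The Langmuir fit reported above can be reproduced in miniature. The sketch below generates equilibrium data from the Langmuir isotherm and recovers the parameters from the linearised form C/q versus C; q_max = 48 mg/g is taken from the abstract, while the equilibrium constant K is an assumed illustrative value.

```python
# Langmuir isotherm: q = q_max * K * C / (1 + K * C).
Q_MAX = 48.0      # mg/g (maximum capacity reported in the abstract)
K = 0.15          # L/mg (assumed illustrative constant)

def langmuir(C):
    return Q_MAX * K * C / (1.0 + K * C)

# Linearised form C/q = C/q_max + 1/(K*q_max): a straight line whose
# slope and intercept recover the parameters from equilibrium data.
cs = [1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
xs = cs
ys = [c / langmuir(c) for c in cs]

# Ordinary least-squares line through (xs, ys).
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

q_max_fit = 1.0 / slope
k_fit = slope / intercept
```

With real (noisy) equilibrium data the same regression yields the best-fit q_max and K, and the scatter about the line indicates how well the Langmuir model describes the system.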
Procedia PDF Downloads 338
281 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily due to its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference when conducting clinical trials, partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least-squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcomes (10%, 50%, and 80%), and covariates (ranging from -0.05 to 0.7) representing weak, moderate or strong relationships.
Treatment effects (ranging over 0, -0.5, and 1 on the log scale) will consider null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimation may be biased when the outcome distributions are not from marginal binary data. Also, it appears that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
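Of the candidate methods listed, marginal standardisation is the easiest to show in a few lines. With a single binary covariate the stratum risks can be taken directly from the counts, and the standardised risks average them over the covariate distribution of the whole sample; the counts below are invented for illustration.

```python
# Marginal standardisation for an adjusted relative risk with one
# binary covariate z: average the stratum-specific risks under
# "everyone treated" and "everyone control" over the covariate
# distribution of the whole sample.

# counts[(t, z)] = (events, total); t = treatment arm, z = covariate
counts = {
    (1, 0): (30, 100), (1, 1): (60, 100),
    (0, 0): (20, 100), (0, 1): (45, 100),
}

def risk(t, z):
    events, total = counts[(t, z)]
    return events / total

n_total = sum(total for _, total in counts.values())
p_z1 = sum(counts[(t, 1)][1] for t in (0, 1)) / n_total   # P(z = 1)

def standardised_risk(t):
    """Risk if the whole sample were assigned to arm t."""
    return risk(t, 0) * (1.0 - p_z1) + risk(t, 1) * p_z1

rr = standardised_risk(1) / standardised_risk(0)
risk_difference = standardised_risk(1) - standardised_risk(0)
```

In practice the stratum risks come from a fitted outcome model rather than raw counts, and the delta method or permutation tests (as in the abstract) supply the standard errors.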
Procedia PDF Downloads 114
280 Arabic Lexicon Learning to Analyze Sentiment in Microblogs
Authors: Mahmoud B. Rokaya
Abstract:
The study of opinion mining and sentiment analysis includes the analysis of opinions, sentiments, evaluations, attitudes, and emotions. The rapid growth of social media, including social networks, reviews, forum discussions, microblogs, and Twitter, has led to a parallel growth in the field of sentiment analysis. The field tries to develop effective tools that make it possible to capture the opinions and trends of people. There are two approaches in the field: lexicon-based and corpus-based methods. A lexicon-based method uses a sentiment lexicon that includes sentiment words and phrases with assigned numeric scores. These scores reveal whether sentiment phrases are positive or negative, their intensity, and/or their emotional orientation. The creation of manual lexicons is hard, which brings the need for adaptive, automated methods for generating a lexicon. The proposed method generates dynamic lexicons based on the corpus and then classifies text using these lexicons. In the proposed method, different approaches are combined to generate lexicons from text, and tweets are classified into 5 classes instead of positive and negative classes. The sentiment classification problem is written as an optimization problem in which finding optimal sentiment lexicons is the goal. The solution was produced using mathematical programming approaches to find the best lexicon for classifying texts; a genetic algorithm was written to find the optimal lexicon. Then, a meta-level feature was extracted based on the optimal lexicon. The experiments were conducted on several datasets. The results, in terms of accuracy, recall and F-measure, outperformed the state-of-the-art methods proposed in the literature on some of the datasets.
A better understanding of the Arabic language, the culture of Arab Twitter users, and the sentiment orientation of words in different contexts can be achieved based on the sentiment lexicons produced by the algorithm.
Keywords: social media, Twitter sentiment, sentiment analysis, lexicon, genetic algorithm, evolutionary computation
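The lexicon-based classification step can be sketched independently of the optimisation. The snippet below scores a tweet by summing word weights and bins the total into five classes; the lexicon entries, weights and class cut-offs are illustrative stand-ins for the quantities the paper's genetic algorithm learns, and no Arabic-specific processing is shown.

```python
# Five-class lexicon scoring: sum the (hypothetical) word weights of a
# tweet and bin the total into one of five sentiment classes. In the
# paper, these weights are what the genetic algorithm optimises.
lexicon = {
    "love": 2.0, "great": 1.5, "good": 1.0,
    "bad": -1.0, "awful": -1.5, "hate": -2.0,
}

CLASSES = ["very negative", "negative", "neutral", "positive", "very positive"]

def classify(tweet):
    score = sum(lexicon.get(w, 0.0) for w in tweet.lower().split())
    if score <= -2.0:
        return "very negative"
    if score < 0.0:
        return "negative"
    if score == 0.0:
        return "neutral"
    if score < 2.0:
        return "positive"
    return "very positive"
```

Wrapping this scorer in a fitness function (classification accuracy on a labelled corpus) and letting a GA evolve the weight table is the optimisation formulation the abstract describes.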
Procedia PDF Downloads 188
279 Development of Green Cement, Based on Partial Replacement of Clinker with Limestone Powder
Authors: Yaniv Knop, Alva Peled
Abstract:
Over the past few years there has been growing interest in the development of Portland composite cements by partial replacement of the clinker with mineral additives. The motivations to reduce the clinker content are threefold: (1) ecological - lower emission of CO2 to the atmosphere; (2) economical - cost reduction; and (3) scientific/technological - improvement of performance. Among the mineral additives being used and investigated, limestone is one of the most attractive, as it is natural, available, and low in cost. The goal of the research is to develop a green cement by partial replacement of the clinker with limestone powder while improving the performance of the cement paste. This work studied blended cements with three limestone powder particle diameters: smaller than, larger than, and similar in size to the clinker particle. Blended cements with limestone of a single particle size distribution and with limestone combining several particle sizes were studied and compared in terms of hydration rate, hydration degree, and the water demand to achieve normal consistency. The performance of these systems was also compared with that of the original cement (without added limestone). It was found that replacing an active material with an inert additive while achieving improved performance can be accomplished by increasing the packing density of the cement-based particles. This may be achieved by replacing the clinker with limestone powders having a combination of several different particle size distributions. Mathematical and physical models were developed to simulate the setting history from initial to final setting time and to predict the packing density of blended cements with limestone of different sizes and various contents.
Besides the effect of limestone as an inert additive on the packing density of the blended cement, the influence of the limestone particle size on three different chemical reactions was studied: the hydration of the cement, the carbonation of the calcium hydroxide, and the reactivity of the limestone with the hydration reaction products. The main results and developments will be presented.
Keywords: packing density, hydration degree, limestone, blended cement
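A common way to reason about packing density when blending powders of several sizes is a target cumulative size distribution such as the modified Andreasen (Dinger-Funk) curve sketched below. This is a generic illustration, not the paper's model; the size limits and the distribution modulus q are assumed values.

```python
# Modified Andreasen (Dinger-Funk) target curve often used when
# combining several powders to raise packing density:
#   CPFT(d) = (d^q - d_min^q) / (d_max^q - d_min^q)
# CPFT is the cumulative fraction finer than d; q near 0.37 is often
# cited as maximising packing. All values below are illustrative.

D_MIN, D_MAX, Q = 0.1, 100.0, 0.37   # particle sizes in micrometres

def cpft(d):
    return (d ** Q - D_MIN ** Q) / (D_MAX ** Q - D_MIN ** Q)

# Fraction of the blend that a clinker-sized class (say 5-30 um)
# should occupy under this target curve:
clinker_band = cpft(30.0) - cpft(5.0)
# Share of the blend finer than 5 um, which limestone fines could fill:
limestone_fines = cpft(5.0)
```

Comparing the measured size distribution of a clinker-limestone blend against such a target curve indicates which fine or coarse fractions are missing, which is the intuition behind combining several limestone size distributions.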
Procedia PDF Downloads 285
278 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
There is dispersed energy in radio frequency (RF) signals that can be reused to power electronic circuits such as sensors, actuators, and identification devices, among other systems, without wired connections or batteries. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene, and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by one or more Schottky diodes connected in series or in shunt. In a rectenna-based system, for instance, the diode used must be able to receive low-power signals at ultra-high frequencies; therefore, low values of series resistance, junction capacitance, and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations are used, such as voltage doublers or modified bridge converters. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in rectifier design. Electronic circuits are commonly analyzed through simulation in the SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and for the analysis of quasi-static electromagnetic field interactions, i.e., at low frequency, these simulators cannot properly model microwave hybrid circuits that contain both lumped and distributed elements. This work therefore proposes the electromagnetic modeling of electronic components in order to create models suitable for simulating circuits at ultra-high frequencies, with application to rectifiers coupled to antennas, i.e., to rectennas in energy harvesting systems.
For this purpose, the Finite-Difference Time-Domain (FDTD) numerical method is applied, and SPICE computational tools are used for comparison. In the present work, the Ampère-Maxwell equation is first used to relate the current density and electric field within the FDTD method to the voltage drop across the modeled lumped component, following the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) formulations proposed in the literature for the passive components and for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems
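The lumped-element idea can be sketched in one dimension. The Python example below is an illustration under assumed values (1 mm cells, a 100 Ω resistor, a Gaussian soft source), not the authors' implementation: one cell of a vacuum 1D FDTD grid is overwritten with the semi-implicit Ampère-Maxwell update in which the resistor's conduction current is averaged over time levels n and n+1, and a pulse crossing that cell is attenuated.

```python
import numpy as np

# Physical constants and grid (illustrative values, not from the paper)
eps0 = 8.854187817e-12
mu0 = 4e-7 * np.pi
c0 = 1.0 / np.sqrt(eps0 * mu0)

N, steps = 400, 500
dz = 1e-3                       # 1 mm cells (assumed)
dt = dz / c0                    # 1D "magic" time step (stable)
R, kr = 100.0, 250              # lumped resistor (ohms) and its cell index
beta = dt / (2.0 * R * eps0 * dz)   # = dt*dz / (2*R*eps0*dx*dy) with dx=dy=dz

Ez, Hy = np.zeros(N), np.zeros(N - 1)
peak_before = peak_after = 0.0
for n in range(steps):
    # Standard Yee updates for the vacuum cells
    Hy += dt / (mu0 * dz) * (Ez[:-1] - Ez[1:])
    curl = np.zeros(N)
    curl[1:-1] = Hy[:-1] - Hy[1:]
    Ez_new = Ez + dt / (eps0 * dz) * curl
    # LE-FDTD resistor cell: conduction current from the semi-implicit
    # average of E at time levels n and n+1 gives the beta coefficients
    Ez_new[kr] = (1 - beta) / (1 + beta) * Ez[kr] \
        + dt / (eps0 * dz * (1 + beta)) * curl[kr]
    Ez = Ez_new
    Ez[100] += np.exp(-((n - 60) / 15.0) ** 2)     # soft Gaussian source
    peak_before = max(peak_before, abs(Ez[200]))   # upstream of resistor
    peak_after = max(peak_after, abs(Ez[320]))     # downstream of resistor

print(f"peak before resistor: {peak_before:.3f}, after: {peak_after:.3f}")
```

The resistor cell behaves like a resistive sheet shunting the wave impedance, so the transmitted pulse downstream of the cell is visibly smaller than the incident one; this is the mechanism the full rectifier model extends to diodes.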
Procedia PDF Downloads 130
277 Structural Design Optimization of Reinforced Thin-Walled Vessels under External Pressure Using Simulation and Machine Learning Classification Algorithm
Authors: Lydia Novozhilova, Vladimir Urazhdin
Abstract:
An optimization problem for reinforced thin-walled vessels under uniform external pressure is considered. Conventional approaches to optimization generally start with pre-defined geometric parameters of the vessel and then employ analytical or numerical calculations and/or experimental testing to verify functionality, such as stability under the projected conditions. The proposed approach consists of two steps. First, the feasibility domain is identified in the multidimensional parameter space; every point in the feasibility domain defines a design satisfying both geometric and functional constraints. Second, an objective function defined on this domain is formulated and optimized. To broaden the applicability of the suggested methodology, the Support Vector Machine (SVM) classification algorithm of machine learning is implemented for identification of the feasible design region. Training data for the SVM classifier are obtained using the Simulation package of SOLIDWORKS®. Based on these data, the SVM algorithm produces a curvilinear boundary separating admissible and inadmissible sets of design parameters with maximal margins. Optimization of the vessel parameters in the feasibility domain is then performed using standard algorithms for constrained optimization. As an example, optimization of a ring-stiffened closed cylindrical thin-walled vessel with semi-spherical caps under high external pressure is implemented. The von Mises stress criterion is used as the functional constraint, but any other stability constraint admitting a mathematical formulation can be incorporated into the proposed approach. The suggested methodology has good potential for reducing the design time needed to find optimal parameters of thin-walled vessels under uniform external pressure.
Keywords: design parameters, feasibility domain, von Mises stress criterion, Support Vector Machine (SVM) classifier
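A minimal sketch of the two-step approach, with synthetic labels standing in for the SOLIDWORKS® Simulation results: the two design parameters, the "stress" surrogate that generates the feasibility labels, and the toy mass objective are all invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Step 1: train an SVM classifier on simulated designs.
# Two normalized design parameters (shell thickness t, stiffener spacing s)
# and a made-up stability surrogate; real labels would come from simulation.
X = rng.uniform(0.0, 1.0, size=(400, 2))
stress = 1.2 * X[:, 1] + 0.3 * np.sin(4 * X[:, 1]) - 1.5 * X[:, 0]
y = (stress < 0.0).astype(int)            # 1 = feasible design

clf = SVC(kernel="rbf", C=10.0).fit(X, y)

# Step 2: optimize an objective over the SVM-identified feasible region
# (here a toy mass proxy proportional to thickness, minimized by grid search;
# any standard constrained-optimization algorithm could be used instead).
grid = np.array([(t, s) for t in np.linspace(0, 1, 50)
                         for s in np.linspace(0, 1, 50)])
feasible = grid[clf.predict(grid) == 1]
best = feasible[np.argmin(feasible[:, 0])]   # thinnest feasible design
print("training accuracy:", clf.score(X, y))
print("best (t, s):", best)
```

The classifier's curvilinear boundary replaces repeated simulation calls during the optimization step, which is where the design-time savings come from.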
Procedia PDF Downloads 327
276 Application of Stochastic Models on the Portuguese Population and Distortion to Workers Compensation Pensioners Experience
Authors: Nkwenti Mbelli Njah
Abstract:
This research was motivated by a project requested by AXA on the topic of pensions payable under the workers compensation (WC) line of business. There are two types of pensions: the compulsorily recoverable and the not compulsorily recoverable. A pension is compulsorily recoverable for a victim when the disability is less than 30% and the annual pension amount is less than six times the minimum national salary. The law defines that the mathematical provisions for compulsorily recoverable pensions must be calculated on the following bases: the TD88/90 mortality table and an interest rate of 5.25% (possibly including a management rate). Managing pensions that are not compulsorily recoverable is a more complex task because the technical bases are not defined by law and much more complex computations are required. In particular, companies have to predict the discounted amount of payments, reflecting the mortality effect, for all pensioners (a task monitored monthly in AXA). The purpose of this research was thus to develop a stochastic model for the future mortality of workers compensation pensioners, both for the Portuguese market and for the AXA portfolio. Not only is past mortality modeled, but future mortality is also projected for the general population of Portugal as well as for the two portfolios mentioned earlier. The global model was split into two parts: a stochastic model for population mortality, which allows for forecasts, combined with a point estimate from a portfolio mortality model obtained through three different relational models (Cox Proportional, Brass Linear and Workgroup PLT). The one-year death probabilities for ages 0-110 for the period 2013-2113 are obtained for the general population and the portfolios. These probabilities are used to compute different life table functions as well as the not compulsorily recoverable reserves, for each of the models, for the pensioners, their spouses, and children under 21.
The results obtained are compared with the not compulsorily recoverable reserves computed using the static mortality table (TD 73/77) currently used by AXA, to assess the impact on this reserve if AXA adopted the dynamic tables.
Keywords: compulsorily recoverable, life table functions, relational models, worker’s compensation pensioners
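Of the three relational models, the Brass Linear model is the simplest to sketch: portfolio logits of survivorship are regressed linearly on the logits of a standard table. In the Python sketch below, the Gompertz-type survivorship curve and the distortion parameters are synthetic placeholders, not TD88/90 or AXA portfolio data.

```python
import numpy as np

def brass_logit(lx):
    """Brass logit of survivorship l(x), Y(x) = 0.5 * ln((1 - l) / l)."""
    return 0.5 * np.log((1.0 - lx) / lx)

# Synthetic standard table: Gompertz-type survivorship, ages 40-99
# (illustrative numbers only).
ages = np.arange(40, 100)
lx_std = np.exp(-0.001 * (np.exp(0.09 * (ages - 39)) - 1) / 0.09)

# A "portfolio" whose mortality is a known logit-linear distortion of the
# standard; in the study this role is played by the WC pensioners.
a_true, b_true = 0.15, 1.10
lx_port = 1.0 / (1.0 + np.exp(2.0 * (a_true + b_true * brass_logit(lx_std))))

# Brass Linear fit: regress portfolio logits on standard logits
A = np.vstack([np.ones(len(ages)), brass_logit(lx_std)]).T
a_hat, b_hat = np.linalg.lstsq(A, brass_logit(lx_port), rcond=None)[0]
print(f"a = {a_hat:.3f}, b = {b_hat:.3f}")
```

The fitted pair (a, b) is the "distortion" of the portfolio relative to the standard: a shifts the overall mortality level, b tilts its age slope, and the inverse logit transform recovers the portfolio life table from any projected standard table.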
Procedia PDF Downloads 164
275 Main Control Factors of Fluid Loss in Drilling and Completion in Shunbei Oilfield by Unmanned Intervention Algorithm
Authors: Peng Zhang, Lihui Zheng, Xiangchun Wang, Xiaopan Kou
Abstract:
Quantitative research on the main control factors of lost circulation has so far considered few factors and relied on a single data source. Using the Unmanned Intervention Algorithm to find the main control factors of lost circulation allows all measurable parameters to be adopted. The degree of lost circulation is characterized by the loss rate, which serves as the objective function. Geological, engineering, and fluid data are used as layers, and 27 factors, such as wellhead coordinates and weight on bit (WOB), are used as dimensions. Data classification is implemented to determine the independent variables of the function. The mathematical equation relating the loss rate to the 27 influencing factors is established by the multiple regression method, and the undetermined coefficient method is used to solve for the undetermined coefficients of the equation. Only three factors have t-test values greater than the test value of 40, and the F-test value is 96.557%, indicating that the correlation of the model is good. The funnel viscosity, final shear force, and drilling time were selected as the main control factors by the elimination method, the contribution rate method, and the functional method. The values calculated for the two wells used for verification differ from the actual values by -3.036 m3/h and -2.374 m3/h, with errors of 7.21% and 6.35%. The influence of engineering factors on the loss rate is greater than that of funnel viscosity and final shear force, and the influence of all three factors is less than that of geological factors. The best combination of funnel viscosity, final shear force, and drilling time was calculated quantitatively; the minimum loss rate of lost circulation wells in the Shunbei area is 10 m3/h. It can be seen that man-made main control factors can only slow down the leakage but cannot fundamentally eliminate it. This is consistent with the characteristics of karst caves and fractures in the Shunbei fault-solution oil and gas reservoir.
Keywords: drilling and completion, drilling fluid, lost circulation, loss rate, main controlling factors, unmanned intervention algorithm
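The screening step described above, multiple regression followed by a t-test on each coefficient, can be sketched as follows. The factor list, coefficients, and noise level are invented for illustration (only five of the 27 factors are mimicked), and a conventional significance cut-off is used rather than the paper's test value of 40.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic drilling records (illustrative, not Shunbei field data):
# three genuinely influential factors plus two irrelevant ones.
funnel_visc = rng.uniform(40, 90, n)      # funnel viscosity, s
final_gel = rng.uniform(3, 15, n)         # final shear force, Pa
drill_time = rng.uniform(1, 20, n)        # drilling time, h
wob = rng.uniform(20, 120, n)             # weight on bit, kN (irrelevant here)
x_coord = rng.uniform(0, 1000, n)         # wellhead coordinate, m (irrelevant)

loss_rate = 60 - 0.4 * funnel_visc - 1.2 * final_gel \
    + 0.8 * drill_time + rng.normal(0, 2.0, n)

# Multiple regression: solve for the undetermined coefficients
X = np.column_stack([np.ones(n), funnel_visc, final_gel,
                     drill_time, wob, x_coord])
beta, *_ = np.linalg.lstsq(X, loss_rate, rcond=None)

# t-statistics: beta_j / se(beta_j), with se^2 = sigma^2 * (X'X)^-1_jj
resid = loss_rate - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = beta / se
names = ["const", "funnel_visc", "final_gel", "drill_time", "wob", "x_coord"]
for name, t in zip(names, t_stats):
    print(f"{name:12s} t = {t:8.2f}")
```

Factors whose |t| clears the threshold survive the elimination step; in this synthetic run the three constructed influences stand out while WOB and the coordinate do not, mirroring the paper's selection of funnel viscosity, final shear force, and drilling time.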
Procedia PDF Downloads 112