Search results for: equivalent linear model

18471 Graded Orientation of the Linear Polymers

Authors: Levan Nadareishvili, Roland Bakuradze, Barbara Kilosanidze, Nona Topuridze, Liana Sharashidze, Ineza Pavlenishvili

Abstract:

Some regularities of the formation of a new structural state of thermoplastic polymers, the gradually oriented (stretched) state (GOS), are discussed. Transition into the GOS is realized by graded oriented stretching, that is, by the action of an inhomogeneous mechanical field on isotropic linear polymers, or by zonal stretching, which is implemented on a standard tensile-testing machine using a specially designed zone stretching device (ZSD). Both technical approaches (especially the zonal stretching method) allow control of such quantitative parameters of gradually oriented polymers as the range of change in relative elongation/orientation degree, the length over which this change occurs, and its profile (linear, hyperbolic, parabolic, logarithmic, etc.). The uniaxial graded stretching method should be considered an effective technological solution for creating polymer materials with a predetermined gradient of physical properties.

Keywords: controlled graded stretching, gradually oriented state, linear polymers, zone stretching device

Procedia PDF Downloads 402
18470 Functional Gene Expression in Human Cells Using Linear Vectors Derived from Bacteriophage N15 Processing

Authors: Kumaran Narayanan, Pei-Sheng Liew

Abstract:

This paper adapts the bacteriophage N15 protelomerase enzyme to assemble linear chromosomes as vectors for gene expression in human cells. Phage N15 has the unique ability to replicate as a linear plasmid with telomeres in E. coli during the prophage stage of its life cycle. The virus-encoded protelomerase enzyme cuts its circular genome and caps its ends to form hairpin telomeres, resulting in a linear, human-chromosome-like structure in E. coli. In mammalian cells, however, no enzyme with TelN-like activities has been found. In this work, we show for the first time the transfer of the protelomerase from phage into human and mouse cells and demonstrate recapitulation of its activity in these hosts. The function of this enzyme is assayed by demonstrating cleavage of its target DNA, followed by detecting telomere formation based on resistance to recBCD enzyme digestion. We show that protelomerase expression persists for at least 60 days, which indicates limited silencing of its expression. Next, we show that an intact human β-globin gene delivered on this linear chromosome accurately retains its expression in the human cellular environment for at least 60 hours, demonstrating its stability and potential as a vector. These results demonstrate that the N15 protelomerase is able to function in mammalian cells to cut and heal DNA to create telomeres, which provides a new tool for creating novel structures by DNA resolution in these hosts.

Keywords: chromosome, beta-globin, DNA, gene expression, linear vector

Procedia PDF Downloads 170
18469 Numerical Investigation of Wire Mesh Heat Pipe for Spacecraft Applications

Authors: Jayesh Mahitkar, V. K. Singh, Surendra Singh Kachhwaha

Abstract:

The Wire Mesh Heat Pipe (WMHP) is an effective component of the thermal control system in spacecraft payloads, utilizing ammonia to transfer heat efficiently. A one-dimensional, generic, and robust mathematical model with a partial-analytical hydraulic approach (PAHA) is developed to study the internal behaviour of the WMHP. In this model, the internal performance during operation, such as mass flow rate and velocity, is modeled along the wire mesh and the vapour core, respectively. The numerical model investigates heat flow along the length as well as pressure drop along the wire mesh and the vapour line in the axial direction. Furthermore, the WMHP is represented by an equivalent thermal resistance network so that the total thermal resistance of the heat pipe and the temperature drops across the evaporator and condenser ends can be evaluated. The numerical investigation is carried out for single-layer and double-layer wire mesh, each with heat inputs of 10 W, 20 W, and 30 W at the evaporator section and the condenser temperature maintained at 20 °C.
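The equivalent-resistance-network idea can be illustrated with a short sketch. The code below evaluates a simple series network for the three heat loads mentioned in the abstract; the resistance values, node layout, and the network_response helper are illustrative assumptions, not the authors' model.

```python
# Illustrative series resistance network for a heat pipe (values are assumed,
# not taken from the paper): evaporator wall, wick, vapour core, wick, condenser wall.
def network_response(resistances_K_per_W, heat_input_W, condenser_temp_C=20.0):
    """Return total resistance and node temperatures for a series network."""
    r_total = sum(resistances_K_per_W)
    temps = [condenser_temp_C]
    # Walk from condenser to evaporator, adding the temperature drop of each resistance.
    for r in reversed(resistances_K_per_W):
        temps.append(temps[-1] + heat_input_W * r)
    return r_total, list(reversed(temps))  # evaporator end first

# Example: assumed resistances in K/W and the three heat loads from the abstract.
resistances = [0.05, 0.30, 0.02, 0.30, 0.05]
for q in (10.0, 20.0, 30.0):
    r_tot, temps = network_response(resistances, q)
    print(f"Q = {q:4.1f} W  R_total = {r_tot:.2f} K/W  "
          f"evaporator T = {temps[0]:.2f} C  dT = {temps[0] - temps[-1]:.2f} K")
```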

Keywords: ammonia, heat transfer, modeling, wire mesh

Procedia PDF Downloads 252
18468 Analysis of Vertical Hall Effect Device Using Current-Mode

Authors: Kim Jin Sup

Abstract:

This paper presents a vertical Hall effect device operating in current mode. Among the different geometries studied and simulated using COMSOL Multiphysics, the optimized cross-shaped model displayed the best sensitivity. The cross-shaped model emerged as the optimum plate, combining the lowest noise and residual offset with the best sensitivity. The symmetrical cross-shaped Hall plate is widely used because of its high sensitivity and immunity to alignment tolerances resulting from the fabrication process. The Hall effect device has been designed in a 0.18-μm CMOS technology. The simulation uses a nominal bias current of 12 μA. The applied magnetic field ranges from 0 mT to 20 mT. Simulation results were obtained in COMSOL and validated against the electrical behavior of an equivalent circuit in Cadence. Simulation results for one structure over the 13 available samples show, for the best geometry, a current-mode sensitivity of 6.6 %/T at 20 mT. Acknowledgment: This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R7117-16-0165, Development of Hall Effect Semiconductor for Smart Car and Device).
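For context, current-mode sensitivity is commonly expressed as the relative current imbalance per unit magnetic field. The sketch below applies that relation with the bias current and field from the abstract; the Hall-induced current imbalance and the current_mode_sensitivity helper are assumptions for illustration, not simulation outputs from the paper.

```python
# Illustrative current-mode sensitivity calculation for a Hall plate.
# S_I = (delta_I / I_bias) / B, expressed in %/T.
def current_mode_sensitivity(delta_i_amps, i_bias_amps, b_tesla):
    """Relative current imbalance per unit magnetic field, in %/T."""
    return 100.0 * (delta_i_amps / i_bias_amps) / b_tesla

i_bias = 12e-6          # nominal bias current from the abstract (12 uA)
b_field = 20e-3         # applied field (20 mT)
delta_i = 15.8e-9       # assumed Hall-induced current imbalance
print(f"S_I = {current_mode_sensitivity(delta_i, i_bias, b_field):.1f} %/T")
```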

Keywords: vertical Hall device, current mode, cross-shaped model, CMOS technology

Procedia PDF Downloads 272
18467 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method

Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola

Abstract:

The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimates are used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safe and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used, simplified version of Bernardi's equation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static tests (no current flowing through the cell) and dynamic tests (current flowing through the cell) are conducted in which the HFS measures the heat exchanged between the cell and the ambient, so that the thermal capacity and the thermal resistance, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the generated heat predicted by Bernardi's equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi's equation as the total heat generation input) and compared against experimental temperature data measured with a T-type thermocouple. At the end of this work, a critical review of the results and the possible reasons for mismatch is reported. The results show that indirectly measuring the heat generation with the HFS gives a more precise estimation than Bernardi's simplified equation. On the one hand, when using Bernardi's simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, when the heat generation is measured indirectly with the HFS, the resulting error is at most 0.28 °C in the temperature prediction, in contrast with 1.38 °C for Bernardi's simplified equation. This illustrates the limitations of Bernardi's simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi's equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi's equation accounts for no losses after the charging or discharging current is cut. However, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi's equation.
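A minimal sketch of the estimation chain described above is given below, assuming the simplified Bernardi equation with only the irreversible term, q = I·(OCV − V), feeding a first-order lumped model C·dT/dt = q − (T − T_amb)/R. All parameter values (cell resistance, thermal resistance and capacity, OCV trajectory) are illustrative assumptions, not the paper's measured data.

```python
# Minimal sketch: simplified Bernardi heat generation (irreversible term only,
# q = I*(OCV - V)) fed into a first-order lumped thermal model
# C*dT/dt = q - (T - T_amb)/R. Parameter values are assumptions for illustration.
import numpy as np

def simulate_cell_temperature(current, voltage, ocv, t_amb, r_th, c_th, dt):
    """Integrate the lumped model with explicit Euler; arrays share one time base."""
    temps = np.empty(len(current))
    temps[0] = t_amb
    for k in range(1, len(current)):
        q_gen = current[k - 1] * (ocv[k - 1] - voltage[k - 1])  # Bernardi, simplified
        dT = (q_gen - (temps[k - 1] - t_amb) / r_th) / c_th
        temps[k] = temps[k - 1] + dt * dT
    return temps

# Assumed 1C discharge of a small cell: constant current, a crude linear OCV drop.
dt, n = 1.0, 3600
current = np.full(n, 2.5)                       # A
ocv = np.linspace(4.1, 3.4, n)                  # V, assumed OCV(SoC) trajectory
voltage = ocv - current * 0.03                  # V, assumed 30 mOhm internal resistance
temps = simulate_cell_temperature(current, voltage, ocv, t_amb=25.0,
                                  r_th=8.0, c_th=60.0, dt=dt)
print(f"Peak predicted cell temperature: {temps.max():.2f} C")
```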

Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization

Procedia PDF Downloads 349
18466 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry

Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc

Abstract:

Business Process Outsourcing (BPO) has been one of the fastest growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", the primary outsourced service in this case is performing audits of the global clients' logistics. In a service industry, manpower is considered the most important yet most expensive resource in the company. Because of this, there is a need to maximize human resources so that people are effectively and efficiently utilized. The main purpose of the study is to optimize the current manpower resources through effective distribution and assignment of different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was built to replicate the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated. Its objective is to minimize the direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model, comparing the actual and simulated results. The analysis of results revealed that there is low utilization of effective capacity because of the failure to determine the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to an overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators. This results in an increase in the utilization of effective capacity of 14.94%. It is recommended that the excess 28 operators be reallocated to other areas of the department. Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit under the expected increase in demand.
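A minimal sketch of the kind of bill-assignment linear program described above is shown here using scipy; the bill types, costs, processing times, demand, and capacity figures are made-up illustrative data, not the study's parameters, and the learning-curve adjustment is omitted.

```python
# Minimal sketch of a bill-assignment LP (illustrative data, not the study's):
# decide how many bills of each type to route to each skill level so that demand
# is met, operator hours are respected, and direct labor cost is minimized.
import numpy as np
from scipy.optimize import linprog

cost_per_bill = np.array([[0.8, 1.0, 1.3],     # rows: bill types, cols: skill levels
                          [1.1, 0.9, 1.2]])    # assumed labor cost per bill
minutes_per_bill = np.array([[4.0, 5.0, 7.0],
                             [6.0, 5.0, 8.0]]) # assumed processing times
demand = np.array([900.0, 600.0])              # bills per day by type
capacity_min = np.array([3000.0, 3500.0, 2500.0])  # available minutes per skill level

n_types, n_skills = cost_per_bill.shape
c = cost_per_bill.ravel()                      # decision vars x[t, s], row-major

# Demand constraints: sum_s x[t, s] == demand[t]
A_eq = np.zeros((n_types, n_types * n_skills))
for t in range(n_types):
    A_eq[t, t * n_skills:(t + 1) * n_skills] = 1.0

# Capacity constraints: sum_t minutes[t, s] * x[t, s] <= capacity_min[s]
A_ub = np.zeros((n_skills, n_types * n_skills))
for s in range(n_skills):
    for t in range(n_types):
        A_ub[s, t * n_skills + s] = minutes_per_bill[t, s]

res = linprog(c, A_ub=A_ub, b_ub=capacity_min, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * (n_types * n_skills))
print(res.x.reshape(n_types, n_skills).round(1), f"cost = {res.fun:.2f}")
```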

Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning

Procedia PDF Downloads 489
18465 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing

Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares

Abstract:

In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes such as 3D printing. These robots have advantages such as speed and lightness that make them suitable for improving the efficiency and productivity of these processes. Consequently, the interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of a linear delta robot for additive manufacturing. Firstly, a methodology based on structured product development processes, through the phases of informational design, conceptual design, and detailed design, is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to establish a set of technical requirements and to define the form, functions, and features of the robot. b) In the conceptual design phase, the functional modeling of the system is performed through an IDEF0 diagram, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic, and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning, and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies, and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that may lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population (point cloud) provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform, developed based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
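The workspace-verification step at the core of the dimensional synthesis can be sketched as follows: sample a cylindrical point cloud and test the inverse kinematics of a generic linear delta (vertical-rail) mechanism at each point. The geometry, the carriage_heights and workspace_coverage helpers, and all dimensions are illustrative assumptions; a genetic algorithm would wrap this check to search for the minimal dimensions.

```python
# Minimal sketch of the workspace check behind the dimensional synthesis:
# sample a cylindrical point cloud and test the inverse kinematics of a linear
# delta (vertical-rail) mechanism at every point. Geometry values are assumptions.
import numpy as np

def carriage_heights(p, tower_xy, arm_length):
    """Inverse kinematics of a linear delta: carriage height on each tower for
    end-effector position p, or None if the point is unreachable."""
    d2 = np.sum((tower_xy - p[:2]) ** 2, axis=1)      # horizontal distance^2 to towers
    if np.any(d2 > arm_length ** 2):
        return None                                    # arm cannot span the gap
    return p[2] + np.sqrt(arm_length ** 2 - d2)        # one height per tower

def workspace_coverage(radius, height, tower_radius, arm_length, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    angles = 2 * np.pi * np.arange(3) / 3
    towers = tower_radius * np.column_stack([np.cos(angles), np.sin(angles)])
    r = radius * np.sqrt(rng.random(n))                # uniform over the disc
    th = 2 * np.pi * rng.random(n)
    pts = np.column_stack([r * np.cos(th), r * np.sin(th), height * rng.random(n)])
    reachable = sum(carriage_heights(p, towers, arm_length) is not None for p in pts)
    return reachable / n

# Prescribed cylindrical workspace (assumed: 150 mm radius, 200 mm height) and a
# candidate geometry; a GA would vary tower_radius and arm_length to push the
# coverage to 1.0 with the smallest possible bodies.
print(f"coverage = {workspace_coverage(150.0, 200.0, tower_radius=220.0, arm_length=300.0):.3f}")
```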

Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms

Procedia PDF Downloads 167
18464 Effect of Variable Fluxes on Optimal Flux Distribution in a Metabolic Network

Authors: Ehsan Motamedian

Abstract:

Finding all optimal flux distributions of a metabolic model is an important challenge in systems biology. In this paper, a new algorithm is introduced to identify all alternate optimal solutions of a large-scale metabolic network. The algorithm reduces the model to decrease the computations needed for finding optimal solutions. The algorithm was implemented on the Escherichia coli metabolic model to find all optimal solutions for lactate and acetate production. There were more optimal flux distributions when acetate production was optimized. The model was reduced from 1076 fluxes to 80 variable fluxes for lactate, while it was reduced to 91 variable fluxes for acetate. These 11 additional variable fluxes resulted in about three times as many optimal flux distributions. The variable fluxes came from 12 different metabolic pathways, and most of them belonged to the nucleotide salvage and extracellular transport pathways.
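The notion of variable fluxes among alternate optima can be illustrated with a flux-variability-style check on a toy network: optimize the objective flux, fix it at its optimum, then minimize and maximize each flux. The two-metabolite network below is an assumed illustration, not the E. coli model or the authors' reduction algorithm.

```python
# Minimal sketch of detecting variable fluxes among alternate optima on a toy
# 2-metabolite network with two parallel internal reactions.
import numpy as np
from scipy.optimize import linprog

# Reactions: v0 uptake -> A, v1: A -> B, v2: A -> B (parallel), v3: B -> product
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  1, -1]])   # metabolite B balance
bounds = [(0, 10)] * 4
obj = np.array([0, 0, 0, -1.0])   # maximize v3 (linprog minimizes)

opt = linprog(obj, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
v_opt = -opt.fun
print(f"optimal product flux = {v_opt:.1f}")

# Fix the objective at its optimum and scan each flux for its feasible range.
S_fix = np.vstack([S, [0, 0, 0, 1.0]])
b_fix = np.array([0, 0, v_opt])
for j in range(4):
    e = np.zeros(4); e[j] = 1.0
    v_min = linprog(e, A_eq=S_fix, b_eq=b_fix, bounds=bounds).fun
    v_max = -linprog(-e, A_eq=S_fix, b_eq=b_fix, bounds=bounds).fun
    tag = "variable" if v_max - v_min > 1e-6 else "fixed"
    print(f"v{j}: [{v_min:.1f}, {v_max:.1f}]  {tag}")
```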

Keywords: flux variability, metabolic network, mixed-integer linear programming, multiple optimal solutions

Procedia PDF Downloads 415
18463 Finding Data Envelopment Analysis Target Using the Multiple Objective Linear Programming Structure in Full Fuzzy Case

Authors: Raziyeh Shamsi

Abstract:

In this paper, we present a multiple objective linear programming (MOLP) problem in the full fuzzy case and find Data Envelopment Analysis (DEA) targets. In the presented model, we seek the least inputs and the most outputs in the production possibility set (PPS) under the variable returns to scale (VRS) assumption, so that the efficiency projection is obtained for all decision-making units (DMUs). Then, we provide an algorithm for finding DEA targets interactively in the full fuzzy case, which solves the full fuzzy problem without defuzzification. Owing to the use of interactive methods, the targets obtained by our algorithm are more applicable, more realistic, and in line with the preferences of the decision maker. Finally, an application of the algorithm to 21 educational institutions is provided.

Keywords: DEA, MOLP, full fuzzy, target

Procedia PDF Downloads 283
18462 Numerical Simulation of Turbulent Flow around Two Cam Shaped Cylinders in Tandem Arrangement

Authors: Arash Mir Abdolah Lavasani, M. Ebrahimisabet

Abstract:

In this paper, the 2-D unsteady viscous flow around two cam-shaped cylinders in tandem arrangement is numerically simulated in order to study the characteristics of the flow in turbulent regimes. The investigation covers the effects of high subcritical and supercritical Reynolds numbers and of the L/D ratio on the total drag coefficient. The equivalent diameter of the cylinders is 27.6 mm. The center-to-center spacing of the two cam-shaped cylinders, defined as the longitudinal pitch ratio, varies in the range 1.5 < L/D < 6. The Reynolds number based on the equivalent circular cylinder varies in the range 27×10³ < Re < 166×10³. Results show that the drag coefficient of both cylinders depends on the pitch ratio; however, the drag coefficient of the downstream cylinder is more dependent on it.

Keywords: cam shaped, tandem, numerical, drag coefficient, turbulent

Procedia PDF Downloads 444
18461 Evolution of Cord Absorbed Dose during Larynx Cancer Radiotherapy, with 3D Treatment Planning and Tissue Equivalent Phantom

Authors: Mohammad Hassan Heidari, Amir Hossein Goodarzi, Majid Azarniush

Abstract:

Radiation doses to tissues and organs were measured using an anthropomorphic phantom as an equivalent to the human body. When high-energy X-rays are externally applied to treat laryngeal cancer, the absorbed dose at the laryngeal lumen is lower than the prescribed dose because of the air space the beam must pass through before reaching the lesion. Especially in the case of high-energy X-rays, the loss of dose is considerable. Three-dimensional absorbed dose distributions have been computed for high-energy photon radiation therapy of laryngeal and hypopharyngeal cancers, using a coaxial pair of opposing lateral beams in fixed positions. Treatment plans were obtained under various conditions of irradiation.

Keywords: 3D treatment planning, anthropomorphic phantom, larynx cancer, radiotherapy

Procedia PDF Downloads 523
18460 Starting Order Eight Method Accurately for the Solution of First Order Initial Value Problems of Ordinary Differential Equations

Authors: James Adewale, Joshua Sunday

Abstract:

In this paper, we developed a linear multistep method, which is implemented in predictor-corrector mode. The corrector is developed by the method of collocation and interpolation of power series approximate solutions at some selected grid points, to give a continuous linear multistep method, which is evaluated at selected grid points to give a discrete linear multistep method. The predictors were also developed by the method of collocation and interpolation of power series approximate solutions, to give a continuous linear multistep method. The continuous linear multistep method is then solved for the independent solution to give a continuous block formula, which is evaluated at selected grid points to give a discrete block method. Basic properties of the corrector were investigated, and it was found to be zero-stable, consistent, and convergent. The efficiency of the method was tested on some linear, non-linear, oscillatory, and stiff first-order initial value problems of ordinary differential equations. The results were found to be better in terms of computer time and error bound when compared with existing methods.
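To make the predictor-corrector idea concrete, here is a low-order illustration (a two-step Adams-Bashforth predictor with a trapezoidal corrector) applied to a simple first-order IVP; it is not the authors' order-eight block method, and the test problem y' = -y is an assumed example.

```python
# Illustrative predictor-corrector scheme applied to y' = -y, y(0) = 1.
import math

def predictor_corrector(f, y0, t0, t_end, h):
    ts, ys = [t0], [y0]
    ys.append(ys[0] + h * f(t0, ys[0]))          # one Euler step to start the method
    ts.append(t0 + h)
    while ts[-1] < t_end - 1e-12:
        t, y = ts[-1], ys[-1]
        fp, fpp = f(t, y), f(ts[-2], ys[-2])
        y_pred = y + h * (1.5 * fp - 0.5 * fpp)         # Adams-Bashforth 2 predictor
        y_corr = y + 0.5 * h * (fp + f(t + h, y_pred))  # trapezoidal corrector
        ts.append(t + h)
        ys.append(y_corr)
    return ts, ys

f = lambda t, y: -y
ts, ys = predictor_corrector(f, 1.0, 0.0, 2.0, 0.05)
print(f"y(2) approx = {ys[-1]:.6f}, exact = {math.exp(-2.0):.6f}")
```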

Keywords: predictor, corrector, collocation, interpolation, approximate solution, independent solution, zero stable, consistent, convergent

Procedia PDF Downloads 478
18459 Three-Dimensional Numerical Investigation for Reinforced Concrete Slabs with Opening

Authors: Abdelrahman Elsehsah, Hany Madkour, Khalid Farah

Abstract:

This article presents a 3-D modified non-linear elastic model in the strain space. The Helmholtz free energy function is introduced together with a dissipation potential surface in the space of thermodynamic conjugate forces. The constitutive equation and the damage evolution law are derived as well. The modified damage model has been used to describe the nonlinear behavior of reinforced concrete (RC) slabs with an opening. A parametric study was carried out to investigate the impact of different factors on the behavior of RC slabs. These factors are the opening area, the opening shape, the location of the opening, and the thickness of the slabs. The numerical results have been compared with experimental data from the literature. Finally, the model showed its ability to be applied to the structural analysis of RC slabs.

Keywords: damage mechanics, 3-D numerical analysis, RC, slab with opening

Procedia PDF Downloads 153
18458 Real-Time Classification of Hemodynamic Response by Functional Near-Infrared Spectroscopy Using an Adaptive Estimation of General Linear Model Coefficients

Authors: Sahar Jahani, Meryem Ayse Yucel, David Boas, Seyed Kamaledin Setarehdan

Abstract:

Near-infrared spectroscopy allows monitoring of oxy- and deoxy-hemoglobin concentration changes associated with the hemodynamic response function (HRF). The HRF is usually affected by natural physiological hemodynamics (systemic interference), which occur in all body tissues, including brain tissue. This makes HRF extraction a very challenging task. In this study, we used a Kalman filter based on a general linear model (GLM) of brain activity to determine the proportion of systemic interference in the brain hemodynamics. The performance of the proposed algorithm is evaluated in terms of the peak-to-peak error (Ep), the mean square error (MSE), and Pearson's correlation coefficient (R²) between the estimated and the simulated hemodynamic responses. This technique is also capable of real-time estimation of single-trial functional activations, and it was applied to classify finger tapping versus the resting state. The average real-time classification accuracy of 74% over 11 subjects demonstrates the feasibility of developing an effective functional near-infrared spectroscopy brain-computer interface (fNIRS-BCI).
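A minimal sketch of adaptive GLM coefficient estimation with a Kalman filter is given below: the regression weights are modeled as a random walk and updated sample by sample. The regressors, noise levels, and the kalman_glm helper are synthetic assumptions for illustration, not the authors' fNIRS pipeline.

```python
# Track time-varying GLM coefficients beta_k for y_k = X[k] @ beta_k + noise,
# where beta_k follows a random walk (process noise q) and r is measurement noise.
import numpy as np

def kalman_glm(y, X, q=1e-5, r=1e-2):
    n, p = X.shape
    beta = np.zeros(p)
    P = np.eye(p)                       # state covariance
    betas = np.empty((n, p))
    for k in range(n):
        P = P + q * np.eye(p)           # predict: random-walk process noise
        x = X[k]
        s = x @ P @ x + r               # innovation variance
        K = P @ x / s                   # Kalman gain
        beta = beta + K * (y[k] - x @ beta)
        P = P - np.outer(K, x) @ P
        betas[k] = beta
    return betas

# Synthetic data: one HRF-like regressor, a slow systemic (drift) regressor, a constant.
rng = np.random.default_rng(1)
t = np.arange(2000) * 0.1
X = np.column_stack([np.sin(0.5 * t) ** 2, np.sin(0.02 * t), np.ones_like(t)])
true_beta = np.array([0.8, 0.3, 0.1])
y = X @ true_beta + 0.05 * rng.standard_normal(t.size)
print("final estimate:", kalman_glm(y, X)[-1].round(3))
```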

Keywords: hemodynamic response function, functional near-infrared spectroscopy, adaptive filter, Kalman filter

Procedia PDF Downloads 135
18457 Mathematical Modeling of District Cooling Systems

Authors: Dana Alghool, Tarek ElMekkawy, Mohamed Haouari, Adel Elomari

Abstract:

District cooling systems have recently captured the attention of many researchers due to the enormous benefits offered by such systems in comparison with traditional cooling technologies. They are considered a major component of urban cities due to the significant reduction in energy consumption. This paper aims to find the optimal design and operation of district cooling systems by developing a mixed-integer linear programming model to minimize the annual total system cost while satisfying the end-user cooling demand. The proposed model is tested with different cooling demand scenarios. Only the results of the very high cooling demand scenario are presented in this paper. A sensitivity analysis on different parameters of the model was performed.

Keywords: annual cooling demand, compression chiller, mathematical modeling, district cooling systems, optimization

Procedia PDF Downloads 180
18456 Development of Graph-Theoretic Model for Ranking Top of Rail Lubricants

Authors: Subhash Chandra Sharma, Mohammad Soleimani

Abstract:

Selecting the correct lubricant for a top-of-rail application is a complex process. In this paper, an approach for selecting the proper lubricant for a top-of-rail (TOR) lubrication system, based on graph theory and a matrix approach, has been developed. The attributes influencing the selection process and their influence on each other have been represented through a digraph and an equivalent matrix. A matrix function called the permanent function is derived. By substituting the level of inherent contribution of the influencing parameters and their influence on each other, assessed qualitatively, a criterion called the suitability index is derived. Based on these indices, lubricants can be ranked for their suitability. The proposed model can be useful for maintenance engineers in selecting the best lubricant for a TOR application. The proposed methodology is illustrated step by step through an example.
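A minimal sketch of the permanent-function criterion is given below: the suitability index of a lubricant is taken as the permanent of a matrix whose diagonal holds the attribute scores and whose off-diagonal entries hold their mutual influences. The 4x4 matrices and candidate scores are assumed example data, not values from the paper.

```python
# Rank lubricants by the permanent of their attribute/influence matrix.
from itertools import permutations
import numpy as np

def permanent(M):
    """Matrix permanent by direct expansion (fine for the small matrices used here)."""
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

# Off-diagonal influences shared by all candidates; diagonals differ per lubricant.
influence = np.array([[0, 3, 2, 4],
                      [2, 0, 3, 2],
                      [3, 2, 0, 3],
                      [1, 3, 2, 0]], dtype=float)

candidates = {"lubricant A": [8, 6, 7, 5],
              "lubricant B": [6, 8, 5, 7],
              "lubricant C": [7, 7, 6, 6]}

scores = {}
for name, diag in candidates.items():
    M = influence.copy()
    np.fill_diagonal(M, diag)
    scores[name] = permanent(M)

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: suitability index = {s:.0f}")
```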

Keywords: lubricant selection, top of rail lubrication, graph theory, ranking of lubricants

Procedia PDF Downloads 272
18455 Competition between Regression Technique and Statistical Learning Models for Predicting Credit Risk Management

Authors: Chokri Slim

Abstract:

The objective of this research is to answer the following question: is there a significant difference between the regression model and statistical learning models in predicting credit risk management? A multiple linear regression (MLR) model was compared with neural networks, namely a multi-layer perceptron (MLP), and with support vector regression (SVR). The population of this study includes 50 banks listed on the Tunis Stock Exchange (TSE) from 2000 to 2016. Firstly, we identify the factors that have a significant effect on the quality of the loan portfolios of banks in Tunisia. Secondly, the study attempts to establish that the systematic use of objective techniques and methods designed to apprehend and assess risk when considering applications for granting credit has a positive effect on the quality of the loan portfolios of banks and their future collectability. Finally, we try to show that bank governance has an impact on the choice of methods and techniques for analyzing and measuring the risks inherent in the banking business, including the risk of non-repayment. The results of empirical tests confirm our claims.
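As an illustration of the kind of comparison described, the sketch below fits an MLR model, an MLP, and SVR on synthetic regression data with scikit-learn and compares cross-validated R²; the generated features and all hyperparameters are placeholder assumptions, not the Tunisian bank panel or the study's model settings.

```python
# Compare MLR, an MLP, and SVR on synthetic data by cross-validated R^2.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)

models = {
    "MLR": LinearRegression(),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 16),
                                      max_iter=2000, random_state=0)),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```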

Keywords: credit risk management, multiple linear regression, principal components analysis, artificial neural networks, support vector machines

Procedia PDF Downloads 125
18454 H∞ Sampled-Data Control for Linear Systems with Time-Varying Delays: Application to Power System

Authors: Chang-Ho Lee, Seung-Hoon Lee, Myeong-Jin Park, Oh-Min Kwon

Abstract:

This paper investigates improved stability criteria for sampled-data control of linear systems with disturbances and time-varying delays. Based on Lyapunov-Krasovskii stability theory, delay-dependent conditions sufficient to ensure H∞ stability of the system are derived in the form of linear matrix inequalities (LMIs). The effectiveness of the proposed method is shown through numerical examples.
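To show what an LMI feasibility check looks like in practice, here is a much-simplified sketch using cvxpy: for a delay-free system x' = Ax, feasibility of P > 0 with AᵀP + PA < 0 certifies stability. The example matrix and tolerances are assumptions; the paper's delay-dependent sampled-data criterion involves additional decision matrices but is verified in the same LMI fashion.

```python
# Feasibility of the basic Lyapunov LMI: find P > 0 with A^T P + P A < 0.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # assumed stable example system

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status == cp.OPTIMAL:
    print("LMI feasible: Lyapunov matrix P =\n", np.round(P.value, 3))
else:
    print("LMI infeasible: no quadratic Lyapunov certificate found")
```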

Keywords: sampled-data control system, Lyapunov-Krasovskii functional, time delay-dependent, LMI, H∞ control

Procedia PDF Downloads 301
18453 Simulation of Gamma Rays Attenuation Coefficient for Some Common Shielding Materials Using Monte Carlo Program

Authors: Cherief Houria, Fouka Mourad

Abstract:

In this work, the simulation of radiation attenuation is carried out in a photon detector setup with different common shielding materials, using a Monte Carlo program called PTM. The aim of the study is to investigate the effect of the atomic weight and the thickness of shielding materials on their gamma radiation attenuation ability. The linear attenuation coefficients of aluminum (Al), iron (Fe), and lead (Pb) were evaluated at a photon energy of 661.7 keV, as emitted from a standard Cs-137 radioactive point source. Experimental measurements were performed for the three materials to obtain their linear attenuation coefficients, using a NaI(Tl) gamma scintillation detector. Our results have been compared with linear attenuation coefficients simulated using the XCOM database and the Geant4 code, and they agree well with both sets of simulation data.
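A minimal sketch of Monte Carlo attenuation through a slab (narrow-beam, no scattering buildup) is given below: photons are assigned exponential free paths and counted as transmitted if the path exceeds the slab thickness, then compared against the Beer-Lambert law. The attenuation coefficient used is an assumed illustrative value, not a result from the paper.

```python
import numpy as np

def transmitted_fraction(mu_per_cm, thickness_cm, n_photons=200_000, seed=42):
    """Monte Carlo estimate of I/I0 for a narrow beam through a slab."""
    rng = np.random.default_rng(seed)
    free_paths = rng.exponential(scale=1.0 / mu_per_cm, size=n_photons)
    return np.mean(free_paths > thickness_cm)

mu = 1.25          # assumed linear attenuation coefficient for Pb at 661.7 keV, 1/cm
for x in (0.5, 1.0, 2.0):
    mc = transmitted_fraction(mu, x)
    analytic = np.exp(-mu * x)
    print(f"x = {x:.1f} cm  MC I/I0 = {mc:.4f}  Beer-Lambert = {analytic:.4f}")
```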

Keywords: gamma photon, Monte Carlo program, radiation attenuation, shielding material, linear attenuation coefficient

Procedia PDF Downloads 182
18452 Solving Fuzzy Multi-Objective Linear Programming Problems with Fuzzy Decision Variables

Authors: Mahnaz Hosseinzadeh, Aliyeh Kazemi

Abstract:

In this paper, a method is proposed for solving fuzzy multi-objective linear programming problems (FMOLPP) with fuzzy right-hand sides and fuzzy decision variables. To illustrate the proposed method, it is applied to the problem of selecting suppliers for an automotive parts producer in Iran in order to find the optimal number of orders allocated to each supplier, considering the conflicting objectives. Finally, the obtained results are discussed.

Keywords: fuzzy multi-objective linear programming problems, triangular fuzzy numbers, fuzzy ranking, supplier selection problem

Procedia PDF Downloads 358
18451 Low Power, Highly Linear, Wideband LNA in Wireless SOC

Authors: Amir Mahdavi

Abstract:

In this paper, a highly linear CMOS low noise amplifier (LNA) for ultra-wideband (UWB) applications is proposed. The proposed LNA uses a linearization technique to improve the second- and third-order intercept points (IIP3). The linearity is improved by removing the common-mode part of all intermodulation components from the cascade topology current, with the biasing current optimized using symmetrical and asymmetrical biasing circuits. Simulation results show that the maximum gain and the noise figure are 6.9 dB and 3.03-4.1 dB over the 3.1-10.6 GHz band, respectively. The power consumption of the LNA core and the IIP3 are 2.64 mW and +4.9 dBm, respectively. The wideband input impedance matching of the LNA is obtained by employing a degeneration inductor (|S11| < -9.1 dB). The proposed UWB LNA circuit is implemented in a 0.18 μm CMOS technology.

Keywords: highly linear LNA, low-power LNA, optimal bias techniques

Procedia PDF Downloads 261
18450 A Mixed 3D Finite Element for Highly Deformable Thermoviscoplastic Materials Under Ductile Damage

Authors: João Paulo Pascon

Abstract:

In this work, a mixed 3D finite element formulation is proposed in order to analyze thermoviscoplastic materials under large strain levels and ductile damage. To this end, a tetrahedral element of linear order is employed, considering a thermoviscoplastic constitutive law together with the neo-Hookean hyperelastic relationship and a nonlocal Gurson porous plasticity theory. The material model is capable of reproducing finite deformations, elastoplastic behavior, void growth, nucleation and coalescence, thermal effects such as plastic work heating and conductivity, strain hardening, and strain-rate dependence. The nonlocal character is introduced by means of a nonlocal parameter applied to the Laplacian of the porosity field. The element degrees of freedom are the nodal values of the deformed position, the temperature, and the nonlocal porosity field. The internal variables are updated at the Gauss points according to the yield criterion and the evolution laws, including the yield stress of the matrix, the equivalent plastic strain, the local porosity, and the plastic components of the Cauchy-Green stretch tensor. Two problems involving 3D specimens and ductile damage are numerically analyzed with the developed computational code: the necking problem and a notched sample. The effect of the nonlocal parameter and the mesh refinement is investigated in detail. Results indicate the need for a proper nonlocal parameter. In addition, the numerical formulation can predict ductile fracture based on the evolution of the fully damaged zone.

Keywords: mixed finite element, large strains, ductile damage, thermoviscoplasticity

Procedia PDF Downloads 66
18449 Vendor Selection and Supply Quotas Determination by Using Revised Weighting Method and Multi-Objective Programming Methods

Authors: Tunjo Perič, Marin Fatović

Abstract:

In this paper, a new methodology for vendor selection and supply quotas determination (VSSQD) is proposed. The VSSQD problem is solved by a model that combines the revised weighting method, for determining the objective function coefficients, with a multiple objective linear programming (MOLP) method based on cooperative game theory. The criteria used for VSSQD are: (1) purchase costs and (2) the product quality supplied by individual vendors. The proposed methodology is tested on the example of flour purchase for a bakery with two decision makers.

Keywords: cooperative game theory, multiple objective linear programming, revised weighting method, vendor selection

Procedia PDF Downloads 332
18448 Robustness Analysis of the Carbon and Nitrogen Co-Metabolism Model of Mucor mucedo

Authors: Nahid Banihashemi

Abstract:

An important emerging area of the life sciences is systems biology, which involves understanding the integrated behavior of large numbers of components interacting via non-linear reaction terms. A centrally important problem in this area is understanding the co-metabolism of protein and carbohydrate, as it has been clearly demonstrated that the ratio of these metabolites in the diet is a major determinant of obesity and related chronic disease. In this regard, we have considered a systems biology model for the co-metabolism of carbon and nitrogen in colonies of the fungus Mucor mucedo. Oscillations are an important diagnostic of the underlying dynamical processes of this model. The maintenance of specific patterns of oscillation and their relation to the robustness of this system are the main issues targeted in this paper. A parametric sensitivity approach has been used as the theoretical framework for analyzing the robustness of this model. As a result, the parameters of the model that produce the largest sensitivities have been identified. Furthermore, the largest changes that can be made in each parameter of the model without losing the oscillations in biomass production have been computed. The results are obtained from an implementation of parametric sensitivity analysis in Matlab.

Keywords: systems biology, parametric sensitivity analysis, robustness, carbon and nitrogen co-metabolism, Mucor mucedo

Procedia PDF Downloads 302
18447 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data

Authors: M. R. Akbari, H. Yousefnia, E. Mirrezaei

Abstract:

Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters that are often used to specify proton range are the water equivalent thickness (WET) and the water equivalent ratio (WER). Since the WER value for a specific material is nearly constant at different proton energies, it is the more useful parameter to compare. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS), and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical, experimental, and simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials was simulated. A typical mono-energetic proton pencil beam, with incident energies spanning the range usually applied in proton therapy (50 MeV to 225 MeV), impinges normally on the phantom. In order to obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In the TRIM calculations, the type of projectile, its energy and angle of incidence, and the type and thickness of the target material must be defined. The 'detailed calculation with full damage cascades' mode was selected for proton transport in the target material. The biggest difference in WER values between the codes was 3.19%, 1.9%, and 0.67% for Al, PMMA, and PS, respectively. In Al and PMMA, the biggest differences between each code and the experimental data were 1.08%, 1.26%, 2.55%, 0.94%, 0.77%, and 0.95% for SEICS, FLUKA, and SRIM, respectively. FLUKA and SEICS had the greatest agreement with the available experimental data in this study (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively). It is concluded that the FLUKA and TRIM codes are capable of simulating Bragg curves and calculating WER values in the studied materials. They can also predict the Bragg peak location and the range of proton beams with acceptable accuracy.
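For reference, a water equivalent ratio can be obtained directly from ranges at the same energy, WER = R_water / R_material. The sketch below applies that definition; the range values in the dictionary are rough illustrative numbers, not the FLUKA/TRIM results reported in the paper.

```python
# WER = (proton range in water) / (range in the material) at the same energy.
ranges_cm = {
    # energy_MeV: (range in water, range in PMMA, range in Al)  -- assumed values
    100: (7.7, 6.6, 3.7),
    150: (15.8, 13.6, 7.5),
    200: (26.0, 22.4, 12.4),
}

for energy, (r_water, r_pmma, r_al) in sorted(ranges_cm.items()):
    wer_pmma = r_water / r_pmma
    wer_al = r_water / r_al
    print(f"{energy} MeV: WER(PMMA) = {wer_pmma:.2f}, WER(Al) = {wer_al:.2f}")
```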

Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations

Procedia PDF Downloads 294
18446 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study

Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming

Abstract:

Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity is primarily because of its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach for estimating an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing both conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7) representing weak, moderate, or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strengths, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which methods are the most efficient, preserve the type-I error rate, are robust to model misspecification, or are the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it seems that marginal standardisation and convex optimisation may perform better than the log-binomial GLM with IWLS.
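A minimal sketch of one candidate method from the list, the modified-Poisson approach (a Poisson GLM with robust standard errors) for an adjusted relative risk, is shown below on simulated trial data; the data-generating values are assumptions, not the simulation scenarios of the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)
n = 1000
treatment = rng.integers(0, 2, size=n)
covariate = rng.normal(size=n)
log_risk = np.log(0.2) + 0.4 * treatment + 0.3 * covariate   # true log relative risk 0.4
p = np.clip(np.exp(log_risk), 0, 1)
outcome = rng.binomial(1, p)

# Modified Poisson: Poisson GLM on a binary outcome with robust (sandwich) SEs.
X = sm.add_constant(np.column_stack([treatment, covariate]))
fit = sm.GLM(outcome, X, family=sm.families.Poisson()).fit(cov_type="HC1")

rr = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"adjusted relative risk = {rr:.2f}  (95% CI {ci[0]:.2f} to {ci[1]:.2f})")
```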

Keywords: binary outcomes, statistical methods, clinical trials, simulation study

Procedia PDF Downloads 87
18445 A Hybrid Method for Determination of Effective Poles Using Clustering Dominant Pole Algorithm

Authors: Anuj Abraham, N. Pappa, Daniel Honc, Rahul Sharma

Abstract:

In this paper, an analysis of some model order reduction techniques is presented. A new hybrid algorithm for model order reduction of linear time-invariant systems is compared with conventional techniques, namely balanced truncation, Hankel norm reduction, and the Dominant Pole Algorithm (DPA). The proposed hybrid algorithm, known as the Clustering Dominant Pole Algorithm (CDPA), is able to compute the full set of dominant poles and their cluster centers efficiently. The dominant poles of a transfer function are specific eigenvalues of the state-space matrix of the corresponding dynamical system. The effectiveness of this novel technique is shown through simulation results.
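As background to dominant-pole methods, the sketch below ranks the poles of a small transfer function by the common dominance measure |residue| / |Re(pole)| using a partial-fraction expansion; the example transfer function is an arbitrary illustration, not the paper's benchmark system or the CDPA itself.

```python
import numpy as np
from scipy.signal import residue

# H(s) = num/den with poles at -1, -5 +/- 10j, -40 (assumed example system).
den = np.poly([-1.0, -5.0 + 10.0j, -5.0 - 10.0j, -40.0]).real
num = np.array([2000.0, 10000.0])

residues, poles, _ = residue(num, den)
dominance = np.abs(residues) / np.abs(poles.real)
order = np.argsort(dominance)[::-1]            # most dominant pole first

for i in order:
    print(f"pole {poles[i].real:+.2f}{poles[i].imag:+.2f}j  "
          f"|residue| = {np.abs(residues[i]):8.2f}  dominance = {dominance[i]:7.2f}")
```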

Keywords: balanced truncation, clustering, dominant pole, Hankel norm, model reduction

Procedia PDF Downloads 581
18444 A Model for Optimizing Inventory Replenishment and Shelf Space Management in Retail Industries

Authors: Nermine A. Harraz, Aliaa Abouali

Abstract:

Retail stores put multiple items up for sale, while the space in the backroom and display areas constitutes a scarce resource. The availability, volume, and location of the products displayed in the showroom influence customer demand. Managing these operations individually results in sub-optimal overall store profit; therefore, a non-linear integer programming (NLIP) model is developed to determine the inventory replenishment and shelf space allocation decisions that together maximize the retailer's profit under shelf space and backroom storage constraints, taking into consideration that the demand rate is positively dependent on the amount and location of items displayed in the showroom. The developed model is solved using LINGO® software. The NLIP model is implemented in a real-world case study in a large retail outlet providing a large variety of products. The proposed model is validated and shows logical results when using the experimental data collected from the market.

Keywords: retailing management, inventory replenishment, shelf space allocation, showroom, backroom

Procedia PDF Downloads 332
18443 Simple Rheological Method to Estimate the Branch Structures of Polyethylene under Reactive Modification

Authors: Mahdi Golriz

Abstract:

The aim of this work is to show that the change in molecular structure of linear low-density polyethylene (LLDPE) during peroxide modification can be detected by a simple rheological method. For this purpose, a commercial-grade LLDPE (ExxonMobil™ LL4004EL) was reacted with different doses of dicumyl peroxide (DCP). The samples were analyzed by size-exclusion chromatography coupled with a light scattering detector. The dynamic shear oscillatory measurements showed a deviation of the δ-|G*| curve from that of the linear LLDPE, which can be attributed to the presence of long-chain branching (LCB). By the use of a simple method that utilizes melt rheology, transformations in molecular architecture induced in an originally linear low-density polyethylene during the early stages of reactive modification were identified. Reasonable and consistent estimates are obtained for the degree of LCB and the volume fractions of the various molecular species produced in the peroxide modification of LLDPE.

Keywords: linear low-density polyethylene, peroxide modification, long-chain branching, rheological method

Procedia PDF Downloads 131
18442 Study on Optimal Control Strategy of PM2.5 in Wuhan, China

Authors: Qiuling Xie, Shanliang Zhu, Zongdi Sun

Abstract:

In this paper, we analyzed the correlation between PM2.5 and five other Air Quality Indices (AQIs) based on the grey relational degree, and built a multivariate nonlinear regression model of PM2.5 and the five monitoring indices. For the optimal control problem of PM2.5, we took a larger-the-better Cauchy-distribution membership function as the satisfaction function. We established a nonlinear programming model with the goal of maximizing the performance-to-price ratio, and the optimal control scheme is given.

Keywords: grey relational degree, multiple linear regression, membership function, nonlinear programming

Procedia PDF Downloads 277