Search results for: non-linear curve fitting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2575

475 Value Index, a Novel Decision Making Approach for Waste Load Allocation

Authors: E. Feizi Ashtiani, S. Jamshidi, M. H. Niksokhan, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually aim to simultaneously minimize two criteria, total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, additional binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This implies that defining separate scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimum total costs or environmental violations. This idea is tested on the Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of the river is simulated with the Streeter-Phelps equation in MATLAB. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-Inequity are plotted separately, as in the conventional approach. In the second, the Value-Equity curve is derived. The comparative results show that the solutions lie in a similar range of inequity with lower total costs, owing to the freedom in environmental violation afforded by the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may yield a better classification for scenario definition. It is also concluded that decision makers would do better to focus on the value index and weight its components to find the most sustainable alternatives based on their requirements.
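The Streeter-Phelps DO sag model used in the study can be sketched in a few lines of Python (the study itself used MATLAB); the rate constants and loads below are hypothetical placeholders, not the Haraz River calibration:

```python
import numpy as np

def streeter_phelps_do(t, l0, d0, kd, ka, do_sat):
    """Dissolved-oxygen sag curve: the deficit grows through BOD decay (kd)
    and shrinks through reaeration (ka); DO = saturation - deficit."""
    deficit = (kd * l0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
              + d0 * np.exp(-ka * t)
    return do_sat - deficit

# Hypothetical parameters (mg/L and 1/day), for illustration only
t = np.linspace(0.0, 10.0, 101)
do = streeter_phelps_do(t, l0=15.0, d0=1.0, kd=0.35, ka=0.7, do_sat=9.0)
print(f"minimum DO {do.min():.2f} mg/L at day {t[do.argmin()]:.1f}")
```

The minimum of the curve marks the critical deficit point downstream of a discharge, which is the quantity a WLA policy constrains.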

Keywords: waste load allocation (WLA), value index, multi objective particle swarm optimization (MOPSO), Haraz River, equity

Procedia PDF Downloads 417
474 Study of Radiation Response in Lactobacillus Species

Authors: Kanika Arora, Madhu Bala

Abstract:

The small intestine epithelium is highly sensitive to, and a major target of, ionizing radiation. Radiation causes gastrointestinal toxicity either by direct deposition of energy or indirectly (through inflammation or bystander effects) by generating free radicals and reactive oxygen species. The oxidative stress generated by radiation causes active inflammation within the intestinal mucosa, leading to structural and functional impairment of the gut epithelial barrier. As a result, there is a loss of tolerance to normal dietary antigens and commensal flora, together with an exaggerated response to pathogens. Dysbiosis may therefore be thought to play a role in radiation enteropathy and can contribute to radiation-induced bowel toxicity. Lactobacilli residing in the gut share a long evolutionary history with their hosts and in doing so have developed intimate and complex symbiotic relationships. The objective of this study was to look for strains with varying resistance to ionizing radiation and to see whether the niche of the bacteria plays any role in their radiation resistance. We isolated Lactobacillus spp. from a probiotic preparation and from the murine gastrointestinal tract, both considered important sources for its isolation. Biochemical characterization did not show a significant difference in properties, while a significant preference was observed in the carbohydrate utilization capacity of the isolates. The effect of cobalt-60 gamma radiation (10 Gy) on lactobacilli cells was investigated, and a curve of cellular survival versus absorbed dose was determined. The radiation resistance studies showed that the responses of the isolates to cobalt-60 gamma radiation differ from each other, and a significant, dose-dependent decrease in survival was observed. Thus the present study revealed that the radioresistance of Lactobacillus depends upon the source from which it has been isolated.
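A dose-survival curve of the kind reported can be summarized by a single-hit exponential model, S = exp(-D/D0); the dose-survival pairs below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical survival fractions for one isolate at increasing gamma doses (Gy);
# illustrative values only, not measured data.
dose = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
surv = np.array([1.0, 0.55, 0.30, 0.17, 0.09, 0.05])

# Single-hit model S = exp(-D/D0): the slope of ln(S) versus dose is -1/D0.
slope, _ = np.polyfit(dose, np.log(surv), 1)
d0 = -1.0 / slope
print(f"D0 (dose reducing survival to 37%) ~ {d0:.2f} Gy")
```

Comparing D0 values fitted per isolate is one simple way to rank radioresistance across sources.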

Keywords: dysbiosis, lactobacillus, mitigation, radiation

Procedia PDF Downloads 132
473 Robotic Mini Gastric Bypass Surgery

Authors: Arun Prasad, Abhishek Tiwari, Rekha Jaiswal, Vivek Chaudhary

Abstract:

Background: Robotic Roux-en-Y gastric bypass has been performed for some time but is technically difficult, requiring operating in both the subdiaphragmatic and infracolic compartments of the abdomen. This can mean dual docking of the robot or a hybrid, partly laparoscopic and partly robotic, operation. The mini/one-anastomosis/omega-loop gastric bypass (MGB) has the advantage of having all dissection and anastomoses in the supracolic compartment and is therefore technically suitable for robotic surgery. Methods: We have performed 208 robotic mini gastric bypass surgeries. The robot is docked above the head of the patient in the midline. The camera port is placed supraumbilically. Two ports are placed on the left side of the patient and one port on the right side. An assistant port is placed between the camera port and the right-sided robotic port for use of the stapler. The distal stomach is stapled from the lesser curve, followed by a vertical sleeve upwards, leading to a long sleeve pouch. The jejunum is taken at 200 cm from the duodenojejunal junction and brought up for a side-to-side gastrojejunostomy. Results: All patients had a successful robotic procedure. Mean operating time was 85 minutes. There were no major intraoperative or postoperative complications. No patient needed conversion or re-explorative surgery. Mean excess weight loss over a period of 2 years was about 75%. There was no mortality. The patient satisfaction score was high and was attributed to the good weight loss and the minimal dietary modifications needed after the procedure. Long-term side effects were anemia and bile reflux in a small number of patients. Conclusions: MGB/OAGB is gaining worldwide interest as a short, simple procedure that has been shown to be a very effective and safe bariatric operation. The purpose of this study was to report on the safety and efficacy of robotic surgery for this procedure. This is the first report of a totally robotic mini gastric bypass.

Keywords: MGB, mini gastric bypass, OAGB, robotic bariatric surgery

Procedia PDF Downloads 293
472 Non-Convex Multi Objective Economic Dispatch Using Ramp Rate Biogeography Based Optimization

Authors: Susanta Kumar Gachhayat, S. K. Dash

Abstract:

Multi-objective non-convex economic dispatch problems of a thermal power plant are of grave concern for deciding the cost of generation and for reducing emission levels so as to diminish global warming and the greenhouse effect. This paper incorporates ramp rate constraints, to obtain better-posed inequality constraints, and valve-point loading in the generation cost of a thermal power plant, solved through ramp rate biogeography based optimization involving mutation and migration. In 50 out of 100 trials, the proposed method was found to outperform, on both the cost and emission objective functions, classical methods such as the lambda iteration method and quadratic programming, as well as many heuristic methods such as particle swarm optimization, weight-improved particle swarm optimization, constriction-factor-based particle swarm optimization, and moderate random particle swarm optimization. Ramp rate biogeography based optimization proves quite advantageous in solving non-convex multi-objective economic dispatch problems subject to nonlinear loads that pollute the source, giving rise to third-harmonic distortions and other such disturbances.
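The valve-point loading that makes the dispatch cost non-convex is conventionally modeled as a rectified sinusoid added to the quadratic fuel cost; a minimal sketch with hypothetical unit coefficients (not taken from the paper):

```python
import numpy as np

def fuel_cost(p, a, b, c, e, f, p_min):
    """Non-convex generator cost: smooth quadratic plus the rectified
    sinusoidal ripple introduced by valve-point loading."""
    return a + b * p + c * p**2 + np.abs(e * np.sin(f * (p_min - p)))

# Hypothetical coefficients for a single unit, for illustration only
p = np.linspace(100.0, 500.0, 401)   # output in MW
cost = fuel_cost(p, a=550.0, b=8.1, c=0.00028, e=300.0, f=0.035, p_min=100.0)
print(f"cost range: {cost.min():.1f} - {cost.max():.1f} $/h")
```

The ripple term creates many local minima, which is why gradient-free metaheuristics such as BBO variants are applied to this problem.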

Keywords: economic load dispatch, ELD, biogeography-based optimization, BBO, ramp rate biogeography-based optimization, RRBBO, valve-point loading, VPL

Procedia PDF Downloads 374
471 Removal of Rhodamine B from Aqueous Solution Using Natural Clay by Fixed Bed Column Method

Authors: A. Ghribi, M. Bagane

Abstract:

The discharge of dyes in industrial effluents is of great concern because their presence and accumulation have a toxic or carcinogenic effect on living species, and the removal of such compounds at low levels is a difficult problem. The adsorption process is an effective and attractive proposition for the treatment of dye-contaminated wastewater. Activated carbon adsorption in fixed beds is a very common technology in water treatment and especially in decolouration processes. However, it is expensive, and the powdered form is difficult to separate from the aquatic system when it becomes exhausted or the effluent reaches the maximum allowable discharge level. Regeneration of exhausted activated carbon by chemical and thermal procedures is also expensive and results in loss of the sorbent. The focus of this research was to evaluate the adsorption potential of raw clay for removing rhodamine B from aqueous solutions using a laboratory fixed-bed column. The continuous sorption process was conducted in order to simulate industrial conditions. The effect of process parameters, such as inlet flow rate, adsorbent bed height, and initial adsorbate concentration, on the shape of the breakthrough curves was investigated. A glass column with an internal diameter of 1.5 cm and a height of 30 cm was used as the fixed-bed column. The pH of the feed solution was set at 8.5. Experiments were carried out at different bed heights (5-20 cm), influent flow rates (1.6-8 mL/min), and influent rhodamine B concentrations (20-80 mg/L). The results showed that the adsorption capacity increases with bed depth and initial concentration and decreases at higher flow rates. Column regeneration was possible for four adsorption-desorption cycles. The column study demonstrates the excellent adsorption capacity of clay for the removal of rhodamine B from aqueous solution; uptake through the fixed-bed column was dependent on bed depth, influent rhodamine B concentration, and flow rate.
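Breakthrough curves like those measured here are often fitted with the Thomas model; a sketch with invented column parameters, chosen only to fall within the ranges quoted in the abstract and not the paper's fitted values:

```python
import numpy as np

def thomas_model(t, k_th, q0, m, q_flow, c0):
    """Thomas model for a fixed-bed breakthrough curve: effluent/influent
    concentration ratio C/C0 as a function of service time t (min)."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / q_flow - k_th * c0 * t))

# Hypothetical parameters: k_th in L/(mg*min), q0 in mg/g, m in g,
# flow 4 mL/min = 0.004 L/min, influent 40 mg/L (illustrative only)
t = np.linspace(0.0, 600.0, 601)
ratio = thomas_model(t, k_th=0.0005, q0=60.0, m=0.8, q_flow=0.004, c0=40.0)
t_b = t[np.argmax(ratio > 0.05)]   # breakthrough: C/C0 first exceeds 5%
print(f"breakthrough (C/C0 > 0.05) at ~ {t_b:.0f} min")
```

Fitting this sigmoid to measured C/C0 data yields the rate constant and bed capacity that summarize each operating condition.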

Keywords: adsorption, breakthrough curve, clay, fixed bed column, rhodamine b, regeneration

Procedia PDF Downloads 270
470 A Study of Plaque Inhibition through a Stenosed Bifurcation Artery Considering a Biomagnetic Blood Flow and Elastic Walls

Authors: M. A. Anwar, K. Iqbal, M. Razzaq

Abstract:

Background and Objectives: This numerical study reflects the magnetic field's effect on the reduction of plaque formation due to stenosis in a stenosed bifurcated artery. The entire artery wall is assumed to be linearly elastic, and blood flow is modeled as a Newtonian, viscous, steady, incompressible, laminar, biomagnetic fluid. Methods: An Arbitrary Lagrangian-Eulerian (ALE) technique is employed to formulate the hemodynamic flow in a bifurcated artery under the effect of an asymmetric magnetic field, with two-way fluid-structure interaction coupling. A stable P2P1 finite element pair is used to discretize the nonlinear system of partial differential equations. The resulting nonlinear system of algebraic equations is solved by the Newton-Raphson method. Results: The numerical results for displacement, velocity magnitude, pressure, and wall shear stresses at Reynolds numbers Re = 500, 1000, 1500, and 2000, in the presence of magnetic fields, are presented graphically. Conclusions: The numerical results show that the presence of the magnetic field influences the displacement and the flow velocity magnitude considerably. The magnetic field reduces the flow separation and the recirculation area adjacent to the stenosis and raises the wall shear stress.
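The Newton-Raphson step used to solve the discretized equations can be illustrated generically; the 2x2 toy system below merely stands in for the much larger FEM system:

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration for a nonlinear algebraic system
    F(x) = 0, of the kind produced by FEM discretization."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))   # linearized correction
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy system: intersection of a circle and a line (illustrative only)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton_raphson(f, jac, x0=[1.0, 2.0])
print(root)  # both components converge to sqrt(2)
```

In the actual solver, f would be the discrete residual and jac the tangent (Jacobian) matrix assembled from the P2P1 elements.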

Keywords: bifurcation, elastic walls, finite element, wall shear stress

Procedia PDF Downloads 172
469 Unified Power Quality Conditioner Presentation and Dimensioning

Authors: Abderrahmane Kechich, Othmane Abdelkhalek

Abstract:

Static converters behave as nonlinear loads that inject harmonic currents into the grid and increase the consumption of reactive power. On the other hand, the increased use of sensitive equipment requires the application of sinusoidal voltages. As a result, electrical power quality control has become a major concern in the field of power electronics. In this context, the unified power quality conditioner (UPQC) was developed. It combines both series and parallel structures: the series filter can protect sensitive loads and compensate for voltage disturbances such as voltage harmonics, voltage dips, or flicker, while the shunt filter compensates for current disturbances such as current harmonics, reactive currents, and imbalance. This dual capability makes it one of the most appropriate devices. Calculating the parameters is an important step and at the same time not an easy one; for that reason, several researchers have relied on trial and error, a method that is difficult for beginning researchers, especially for the controller parameters. This paper therefore gives a mathematical way to calculate almost all of the UPQC parameters without resorting to trial and error. It also gives a new approach for calculating the PI regulator parameters, with the aim of obtaining a stable UPQC able to compensate for disturbances acting on the waveform of the line voltage and load current, in order to improve electrical power quality.
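A minimal discrete-time PI regulator of the kind tuned in the paper can be sketched as follows; the plant, gains, and sample time are illustrative placeholders, not the UPQC design values:

```python
class PIController:
    """Minimal discrete PI regulator (the building block of dual-loop
    control); gains and sample time here are illustrative only."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, error):
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral

# Drive a first-order plant y' = (u - y)/tau toward a unit setpoint
pi = PIController(kp=2.0, ki=8.0, dt=1e-3)
y, tau, dt = 0.0, 0.05, 1e-3
for _ in range(2000):
    u = pi.update(1.0 - y)
    y += dt * (u - y) / tau
print(f"output after 2 s: {y:.3f}")
```

The integral term removes the steady-state error, which is why PI (rather than pure proportional) loops regulate the UPQC DC bus and filter currents.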

Keywords: UPQC, shunt active filter, series active filter, PI controller, PWM control, dual-loop control

Procedia PDF Downloads 397
468 Lightweight Ceramics from Clay and Ground Corncobs

Authors: N. Quaranta, M. Caligaris, R. Varoli, A. Cristobal, M. Unsen, H. López

Abstract:

Corncobs are agricultural wastes that can be used as fuel or as raw material in different industrial processes such as cement manufacture, contaminant adsorption, and chemical compound synthesis. The aim of this work is to characterize this waste and analyze the feasibility of its use as a pore-forming material in the manufacture of lightweight ceramics for the civil construction industry. The characterization of the raw materials is carried out using various techniques: X-ray diffraction analysis, differential thermal and thermogravimetric analyses, FTIR spectroscopy, and ecotoxicity evaluation, among others. The ground corncobs, with particle size less than 2 mm, are mixed with clay at up to 30% by volume and shaped by uniaxial pressing at 25 MPa, with 6% humidity, in moulds of 70 mm x 40 mm x 18 mm. The green bodies are then heat treated at 950°C for two hours, following the treatment curves used in the ceramic industry. The ceramic probes are characterized by several techniques: density, porosity and water absorption, permanent volumetric variation, loss on ignition, microscopy analysis, and mechanical properties. DTA-TGA analysis of the corncobs shows a small mass loss in the TGA curve in the range 20°-250°C and exothermic peaks at 250°-500°C. The FTIR spectrum of the corncob sample shows the characteristic pattern of this kind of organic matter, with stretching vibration bands of adsorbed water, methyl groups, C-O and C-C bonds, and the complex form of the cellulose and hemicellulose glycosidic bonds. The obtained ceramic bodies present good external characteristics, without loose edges, and properties adequate for market requirements. The porosity values of the sintered pieces are higher than those of the reference sample without waste addition. The results generally indicate that it is possible to use corncobs as a porosity former in ceramic bodies without modifying the usual sintering temperatures employed in the industry.
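The porosity and density measurements mentioned are typically obtained by the Archimedes method; a sketch with hypothetical weighings (not the paper's data):

```python
def apparent_porosity(dry_g, suspended_g, saturated_g):
    """Archimedes method: apparent porosity (%) and bulk density (g/cm3)
    from dry, water-suspended, and water-saturated weights, taking the
    density of water as 1 g/cm3."""
    exterior_volume = saturated_g - suspended_g       # cm3 of the whole body
    porosity = 100.0 * (saturated_g - dry_g) / exterior_volume
    bulk_density = dry_g / exterior_volume
    return porosity, bulk_density

# Hypothetical weighings for one fired probe, for illustration only
p, rho = apparent_porosity(dry_g=52.0, suspended_g=28.0, saturated_g=60.0)
print(f"apparent porosity {p:.1f}%, bulk density {rho:.2f} g/cm3")
```

Comparing these two figures between waste-added and reference probes quantifies the pore-forming effect of the corncob addition.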

Keywords: ceramic industry, biomass, recycling, hemicellulose glycosidic bonds

Procedia PDF Downloads 403
467 The Effect of Mathematical Modeling of Damping on the Seismic Energy Demands

Authors: Selamawit Dires, Solomon Tesfamariam, Thomas Tannert

Abstract:

Modern earthquake engineering and design encompass the performance-based design philosophy. The main objective in performance-based design is to achieve a system performing precisely to meet the design objectives, so as to reduce unintended seismic risks and associated losses. Energy-based earthquake-resistant design is one of the design methodologies that can be implemented in performance-based earthquake engineering. In energy-based design, the seismic demand is usually described as the ratio of the hysteretic to input energy. Once the hysteretic energy is known as a percentage of the input energy, it is distributed among the energy-dissipating components of a structure. The hysteretic to input energy ratio is highly dependent on the inherent damping of a structural system. In numerical analysis, damping can be modeled as stiffness-proportional, mass-proportional, or a linear combination of stiffness and mass. In this study, the effect of the mathematical modeling of damping on the estimation of seismic energy demands is investigated by considering elastic-perfectly-plastic single-degree-of-freedom systems representing short- to long-period structures. Furthermore, the seismicity of Vancouver, Canada, is used in the nonlinear time history analysis. According to the preliminary results, the input energy demand is not sensitive to the type of damping model deployed; consistent results are achieved regardless of the damping model utilized in the numerical analyses. On the other hand, the hysteretic to input energy ratios vary significantly between the different damping models.
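When damping is modeled as a linear combination of mass and stiffness (Rayleigh damping, C = a0*M + a1*K), the two coefficients follow from the target damping ratio at two anchor frequencies; a sketch with illustrative anchor modes, not the study's values:

```python
import numpy as np

def rayleigh_coefficients(omega_i, omega_j, zeta):
    """Coefficients of C = a0*M + a1*K that give damping ratio zeta at
    the two circular frequencies omega_i and omega_j."""
    a0 = zeta * 2.0 * omega_i * omega_j / (omega_i + omega_j)
    a1 = zeta * 2.0 / (omega_i + omega_j)
    return a0, a1

# Target 5% damping anchored at 1 Hz and 5 Hz modes (illustrative)
w1, w2 = 2.0 * np.pi * 1.0, 2.0 * np.pi * 5.0
a0, a1 = rayleigh_coefficients(w1, w2, zeta=0.05)
# Effective ratio at any frequency w: zeta(w) = a0/(2w) + a1*w/2
zeta_check = a0 / (2.0 * w1) + a1 * w1 / 2.0
print(a0, a1, zeta_check)
```

Between the anchors the effective ratio dips below the target and outside them it grows, which is one mechanism by which the damping model choice changes hysteretic energy demands across the period range.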

Keywords: damping, energy-based seismic design, hysteretic energy, input energy

Procedia PDF Downloads 164
466 Performance Based Design of Masonry Infilled Reinforced Concrete Frames for Near-Field Earthquakes Using Energy Methods

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

Performance-based design (PBD) is an iterative exercise in which a preliminary trial design of the building structure is selected; if the trial design does not conform to the desired performance objective, it is revised. In this context, developing a fundamental approach for performance-based seismic design of masonry infilled frames with a minimum number of trials is an important objective. The paper presents a plastic design procedure based on the energy balance concept for PBD of multi-story, multi-bay masonry infilled reinforced concrete (R/C) frames subjected to near-field earthquakes. The proposed energy-based plastic design procedure was implemented for trial performance-based seismic design of representative masonry infilled reinforced concrete frames with various practically relevant distributions of masonry infill panels over the frame elevation. Non-linear dynamic analyses of the trial PBDs of masonry infilled R/C frames were performed under the action of near-field earthquake ground motions. The results of the non-linear dynamic analyses demonstrate that the proposed energy method is effective for performance-based design of masonry infilled R/C frames under near-field as well as far-field earthquakes.

Keywords: masonry infilled frame, energy methods, near-fault ground motions, pushover analysis, nonlinear dynamic analysis, seismic demand

Procedia PDF Downloads 289
465 Structural Analysis and Strengthening of the National Youth Foundation Building in Igoumenitsa, Greece

Authors: Chrysanthos Maraveas, Argiris Plesias, Garyfalia G. Triantafyllou, Konstantinos Petronikolos

Abstract:

The current paper presents a structural assessment and proposals for retrofit of the National Youth Foundation Building, an existing reinforced concrete (RC) building in the city of Igoumenitsa, Greece. The building is scheduled to be renovated in order to create a Municipal Cultural Center. The bearing capacity and structural integrity have been investigated in relation to the provisions and requirements of the Greek Retrofitting Code (KAN.EPE.) and European Standards (Eurocodes). The capacity of the existing concrete structure that makes up the two central buildings in the complex (buildings II and IV) has been evaluated both in its present form and after including several proposed architectural interventions. The structural system consists of spatial frames of columns and beams that have been simulated using beam elements. Some RC elements of the buildings have been strengthened in the past by means of concrete jacketing and have had cracks sealed with epoxy injections. Static nonlinear (pushover) analysis has been used to assess the seismic performance of the two structures with regard to performance level B1 from KAN.EPE. Retrofitting scenarios are proposed for the two buildings, including type Λ steel bracings and placement of concrete shear walls in the transverse direction, in order to achieve the design-specification deformation in each applicable situation, improve the seismic performance, and reduce the number of interventions required.

Keywords: earthquake resistance, pushover analysis, reinforced concrete, retrofit, strengthening

Procedia PDF Downloads 290
464 Combination of Diane-35 and Metformin to Treat Early Endometrial Carcinoma in PCOS Women with Insulin Resistance

Authors: Xin Li, Yan-Rong Guo, Jin-Fang Lin, Yi Feng, Håkan Billig, Ruijin Shao

Abstract:

Background: Young women with polycystic ovary syndrome (PCOS) have a high risk of developing endometrial carcinoma. There is a need for new medical therapies that can reduce the need for surgical intervention so as to preserve the fertility of these patients. The aim of the study was to describe and discuss cases of women with PCOS and insulin resistance (IR) with early endometrial carcinoma co-treated with Diane-35 and metformin. Methods: Five PCOS-IR women who were scheduled for diagnosis and therapy of early endometrial carcinoma were recruited. The hospital records and endometrial pathology reports were reviewed. All patients were co-treated with Diane-35 and metformin for 6 months to reverse the endometrial carcinoma and preserve their fertility. Before, during, and after treatment, endometrial biopsies and blood samples were obtained and oral glucose tolerance tests were performed. Endometrial pathology was evaluated. Body weight (BW), body mass index (BMI), follicle-stimulating hormone (FSH), luteinizing hormone (LH), total testosterone (TT), sex hormone-binding globulin (SHBG), free androgen index (FAI), insulin area under the curve (IAUC), and homeostasis model assessment of insulin resistance (HOMA-IR) were determined. Results: Clinical stage 1a, low-grade endometrial carcinoma was confirmed before treatment. After 6 months of co-treatment, all patients showed normal epithelia, with no evidence of atypical hyperplasia or endometrial carcinoma. Co-treatment resulted in significant decreases in BW, BMI, TT, FAI, IAUC, and HOMA-IR, in parallel with a significant increase in SHBG. There were no differences in the FSH and LH levels after co-treatment. Conclusions: Combined treatment with Diane-35 and metformin has the potential to revert endometrial carcinoma to normal endometrial cells in PCOS-IR women. The cellular and molecular mechanisms behind this effect merit further investigation.

Keywords: PCOS, progesterone resistance, insulin resistance, steroid hormone receptors, endometrial carcinoma

Procedia PDF Downloads 404
463 Robust Shrinkage Principal Component Parameter Estimator for Combating Multicollinearity and Outliers’ Problems in a Poisson Regression Model

Authors: Arum Kingsley Chinedu, Ugwuowo Fidelis Ifeanyi, Oranye Henrietta Ebele

Abstract:

The Poisson regression model (PRM) is a nonlinear model that belongs to the exponential family of distributions. PRM is suitable for studying count variables with appropriate covariates, and it sometimes suffers from multicollinearity in the explanatory variables and outliers in the response variable. This study aims to address the problems of multicollinearity and outliers jointly in a Poisson regression model. We developed an estimator, called the robust modified jackknife PCKL parameter estimator, by combining the principal component estimator, the modified jackknife KL estimator, and the transformed M-estimator to address both problems in a PRM. The superiority conditions for this estimator were established, and its properties were derived. The estimator inherits the characteristics of the combined estimators, making it efficient in addressing both problems, and it should be of immediate interest to the research community, advancing this line of study in terms of novelty compared with other work in the area. The performance of the robust modified jackknife PCKL estimator was compared with that of other existing estimators using the mean squared error (MSE) as the evaluation criterion, through a Monte Carlo simulation study and real-life data. The results of the analytical study show that the estimator outperformed the other estimators considered, having the smallest MSE across all sample sizes, levels of correlation, percentages of outliers, and numbers of explanatory variables.
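A greatly simplified stand-in for the shrinkage estimation discussed, ridge-penalized Poisson regression fitted by iteratively reweighted least squares, can be sketched as follows; the proposed robust modified jackknife PCKL estimator combines further components (principal components, jackknife KL, M-estimation) not reproduced here, and the data are simulated:

```python
import numpy as np

def poisson_ridge_irls(X, y, lam=1.0, n_iter=50):
    """Ridge-penalized Poisson regression (log link) via iteratively
    reweighted least squares; the penalty lam stabilizes the fit when
    the columns of X are nearly collinear."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                 # mean under the log link
        w = mu                                # Poisson working weights
        z = X @ beta + (y - mu) / mu          # working response
        A = X.T @ (w[:, None] * X) + lam * np.eye(X.shape[1])
        beta = np.linalg.solve(A, X.T @ (w * z))
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=500)   # near-collinear column
y = rng.poisson(np.exp(0.5 * X[:, 0] - 0.3 * X[:, 1]))
print(poisson_ridge_irls(X, y, lam=2.0))
```

Without the penalty, the near-collinear pair of columns inflates the variance of the coefficient estimates, which is the instability the shrinkage estimators in the abstract target.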

Keywords: jackknife modified KL, outliers, multicollinearity, principal component, transformed M-estimator

Procedia PDF Downloads 61
462 Pharmacokinetic Modeling of Valsartan in Dog following a Single Oral Administration

Authors: In-Hwan Baek

Abstract:

Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose, using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) a single oral dose of valsartan 160 mg (Diovan® 160 mg), in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, and 24 h following oral administration. Plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling was performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling in ADAPT 5 software. After a single dose of valsartan 80 mg, the mean maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h. The area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in valsartan dose, while the pharmacokinetic parameters of elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. The valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following a single dose of valsartan 80 mg or 160 mg. In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and the subsequent quantitative modeling approaches provided detailed pharmacokinetic information on valsartan. The current findings provide useful information in dogs that will aid future development of improved formulations or fixed-dose combinations.
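The fitted one-compartment model with first-order absorption and elimination is the Bateman equation; the parameters below are hypothetical placeholders chosen only to give a profile of roughly the reported shape, not the estimated dog parameters:

```python
import numpy as np

def one_compartment_oral(t, dose, f, v, ka, ke):
    """Bateman equation: plasma concentration after a single oral dose
    under first-order absorption (ka) and elimination (ke)."""
    return (f * dose * ka / (v * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

# Hypothetical parameters for an 80 mg dose (bioavailability f, volume v in L,
# rate constants in 1/h); illustrative only
t = np.linspace(0.0, 24.0, 481)
c = one_compartment_oral(t, dose=80.0, f=0.25, v=4.0, ka=1.5, ke=0.2)
tmax = t[c.argmax()]
print(f"Cmax ~ {c.max():.2f} ug/mL at tmax ~ {tmax:.2f} h")
```

With ka >> ke the peak occurs at tmax = ln(ka/ke)/(ka - ke), and the terminal slope of log-concentration gives the elimination half-life.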

Keywords: dose-dependent, modeling, pharmacokinetics, valsartan

Procedia PDF Downloads 295
461 Predictive Value of Hepatitis B Core-Related Antigen (HBcrAg) during Natural History of Hepatitis B Virus Infection

Authors: Yanhua Zhao, Yu Gou, Shu Feng, Dongdong Li, Chuanmin Tao

Abstract:

The natural history of HBV infection can include immune-tolerant (IT), immune-clearance (IC), HBeAg-negative inactive/quiescent carrier (ENQ), and HBeAg-negative hepatitis (ENH) phases. As current biomarkers for discriminating these four phases have some weaknesses, additional serological indicators are needed. Hepatitis B core-related antigen (HBcrAg), encoded by the precore/core gene, comprises denatured HBeAg, HBV core antigen (HBcAg), and a 22 kDa precore protein (p22cr). It has been shown to have a close association with the natural history of hepatitis B infection, but without specific cutoff values or diagnostic parameters to establish its diagnostic efficacy. This study aimed to clarify the distribution of HBcrAg levels and evaluate its diagnostic performance during the natural history of infection from a Western Chinese perspective. We analyzed 294 samples collected from treatment-naïve chronic hepatitis B (CHB) patients in different phases (IT=64; IC=72; ENQ=100; ENH=58). We measured HBcrAg values and analyzed the relationship between HBcrAg and HBV DNA. HBsAg and other clinical parameters were quantitatively tested. HBcrAg levels in the four phases were 9.30 log U/mL, 8.80 log U/mL, 3.00 log U/mL, and 5.10 log U/mL, respectively (p < 0.0001). Receiver operating characteristic curve analysis demonstrated that the areas under the curve (AUCs) of HBcrAg and quantitative HBsAg at cutoff values of 9.25 log U/mL and 4.355 log IU/mL for distinguishing the IT from the IC phase were 0.704 and 0.694, with sensitivities of 76.39% and 59.72% and specificities of 53.13% and 79.69%, respectively. The AUCs of HBcrAg and quantitative HBsAg at cutoff values of 4.15 log U/mL and 2.395 log IU/mL for discriminating between the ENQ and ENH phases were 0.931 and 0.653, with sensitivities of 87.93% and 84% and specificities of 91.38% and 39%, respectively. Therefore, HBcrAg levels varied significantly among the four natural phases of HBV infection. HBcrAg had higher predictive performance than quantitative HBsAg for distinguishing between ENQ and ENH patients and similar performance for discriminating between the IT and IC phases, indicating that HBcrAg could be a potential serological marker for CHB.
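The ROC quantities reported (AUC, and sensitivity/specificity at a cutoff) can be computed empirically; the HBcrAg values below are invented for illustration, not patient data:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Empirical AUC: the Mann-Whitney probability that a randomly chosen
    positive case scores above a randomly chosen negative case (ties = 1/2)."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def sens_spec_at_cutoff(scores_pos, scores_neg, cutoff):
    """Sensitivity/specificity of the rule 'positive if score >= cutoff'."""
    sens = np.mean(np.asarray(scores_pos) >= cutoff)
    spec = np.mean(np.asarray(scores_neg) < cutoff)
    return sens, spec

# Hypothetical HBcrAg values (log U/mL) for ENH vs ENQ patients (illustrative)
enh = np.array([5.4, 4.9, 6.1, 4.2, 5.8, 4.6])
enq = np.array([3.1, 2.8, 4.3, 3.6, 2.4, 3.9])
print(roc_auc(enh, enq), sens_spec_at_cutoff(enh, enq, 4.15))
```

Scanning the cutoff over all observed values traces the full ROC curve, and the cutoff maximizing sensitivity + specificity is the usual Youden-index choice.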

Keywords: chronic hepatitis B, hepatitis B core-related antigen, hepatitis B surface antigens, hepatitis B virus

Procedia PDF Downloads 411
460 Parameters Affecting the Elasto-Plastic Behavior of Outrigger Braced Walls to Earthquakes

Authors: T. A. Sakr, Hanaa E. Abd-El-Mottaleb

Abstract:

Outrigger-braced wall systems are commonly used to provide high-rise buildings with the lateral stiffness required for wind and earthquake resistance. The existence of outriggers adds to the stiffness and strength of walls, as reported by several studies. The effects of different parameters on the elasto-plastic dynamic behavior of outrigger-braced wall systems under earthquakes are investigated in this study. The parameters investigated include outrigger stiffness, concrete strength, and reinforcement arrangement, the main design parameters in wall design. In addition to being significant for the wall behavior, such parameters may change the failure mode and delay crack propagation, and consequently failure, as the wall is excited by earthquakes. A bi-linear stress-strain relation for concrete with limited tensile strength, and truss members with a bi-linear stress-strain relation for the reinforcement, were used in the finite element analysis of the problem. The well-known 1940 El Centro earthquake record is used in the study. Emphasis was given to lateral drift, normal stresses, and crack pattern as behavior-controlling determinants. The results indicated significant effects of the studied parameters, such that a stiffer outrigger, higher-grade concrete, and concentrating the reinforcement at the wall edges enhance the behavior of the system. Concrete stresses and cracking behavior are significantly improved, while smaller improvements in drift are observed.

Keywords: outrigger, shear wall, earthquake, nonlinear

Procedia PDF Downloads 278
459 Magneto-Transport of Single Molecular Transistor Using Anderson-Holstein-Caldeira-Leggett Model

Authors: Manasa Kalla, Narasimha Raju Chebrolu, Ashok Chatterjee

Abstract:

We have studied the quantum transport properties of a single molecular transistor in the presence of an external magnetic field using the Keldysh Green function technique. The Anderson-Holstein-Caldeira-Leggett model describes the single molecular transistor, which consists of a molecular quantum dot (QD) coupled to two metallic leads and placed on a substrate that acts as a heat bath. The phonons are eliminated by the Lang-Firsov transformation, and the effective Hamiltonian is used to study the effect of an external magnetic field on the spectral density function, tunneling current, differential conductance, and spin polarization. A peak in the spectral function corresponds to a possible excitation. In the absence of a magnetic field, the spin-up and spin-down states are degenerate; the magnetic field lifts this degeneracy, splitting the central peak of the spectral function. The tunneling current decreases with increasing magnetic field. We observed that even at zero magnetic field, the differential conductance peak is split in the presence of electron-phonon interaction. As the magnetic field is increased, each peak splits into two, each indicating the existence of an energy level; thus, the number of energy levels available for transport in the bias window increases with the magnetic field. In the presence of electron-phonon interaction, the differential conductance is in general reduced and decreases faster with the magnetic field. As the magnetic field strength increases, the spin polarization of the current increases. Our results show that a strongly interacting QD coupled to metallic leads in an external magnetic field parallel to the plane of the QD acts as a spin filter at zero temperature.
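The peak splitting can be sketched in a simple single-particle picture (an illustration only, not the Keldysh calculation): the field shifts the spin-up and spin-down levels by plus or minus g·mu_B·B/2 about the zero-field level:

```python
MU_B = 5.7883818060e-5  # Bohr magneton in eV/T

def zeeman_split_peaks(eps0, B, g=2.0):
    """Positions (eV) of the spin-up/spin-down spectral peaks.

    At B = 0 the two spin states are degenerate (a single peak at eps0);
    a field B lifts the degeneracy, splitting the peak by g*mu_B*B.
    Illustrative single-particle picture; g = 2 is an assumption.
    """
    delta = g * MU_B * B
    return eps0 - delta / 2.0, eps0 + delta / 2.0
```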

Keywords: Anderson-Holstein model, Caldeira-Leggett model, spin-polarization, quantum dots

Procedia PDF Downloads 177
458 On Stochastic Models for Fine-Scale Rainfall Based on Doubly Stochastic Poisson Processes

Authors: Nadarajah I. Ramesh

Abstract:

Much of the research on stochastic point process models for rainfall has focused on Poisson cluster models constructed from either the Neyman-Scott or Bartlett-Lewis processes. The doubly stochastic Poisson process provides a rich class of point process models, especially for fine-scale rainfall modelling. This paper provides an account of recent development on this topic and presents the results based on some of the fine-scale rainfall models constructed from this class of stochastic point processes. Amongst the literature on stochastic models for rainfall, greater emphasis has been placed on modelling rainfall data recorded at hourly or daily aggregation levels. Stochastic models for sub-hourly rainfall are equally important, as there is a need to reproduce rainfall time series at fine temporal resolutions in some hydrological applications. For example, the study of climate change impacts on hydrology and water management initiatives requires the availability of data at fine temporal resolutions. One approach to generating such rainfall data relies on the combination of an hourly stochastic rainfall simulator, together with a disaggregator making use of downscaling techniques. Recent work on this topic adopted a different approach by developing specialist stochastic point process models for fine-scale rainfall aimed at generating synthetic precipitation time series directly from the proposed stochastic model. One strand of this approach focused on developing a class of doubly stochastic Poisson process (DSPP) models for fine-scale rainfall to analyse data collected in the form of rainfall bucket tip time series. In this context, the arrival pattern of rain gauge bucket tip times N(t) is viewed as a DSPP whose rate of occurrence varies according to an unobserved finite state irreducible Markov process X(t). 
Since the likelihood function of this process can be obtained, by conditioning on the underlying Markov process X(t), the models were fitted with maximum likelihood methods. The proposed models were applied directly to the raw data collected by tipping-bucket rain gauges, thus avoiding the need to convert tip-times to rainfall depths prior to fitting the models. One advantage of this approach was that the use of maximum likelihood methods enables a more straightforward estimation of parameter uncertainty and comparison of sub-models of interest. Another strand of this approach employed the DSPP model for the arrivals of rain cells and attached a pulse or a cluster of pulses to each rain cell. Different mechanisms for the pattern of the pulse process were used to construct variants of this model. We present the results of these models when they were fitted to hourly and sub-hourly rainfall data. The results of our analysis suggest that the proposed class of stochastic models is capable of reproducing the fine-scale structure of the rainfall process, and hence provides a useful tool in hydrological modelling.
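A minimal simulation of such a doubly stochastic (Markov-modulated) Poisson process, assuming a two-state hidden chain with exponential holding times, might look like the sketch below; the rates are illustrative, not fitted values from the paper:

```python
import random

def simulate_mmpp(rates, q, t_end, seed=42):
    """Simulate a Markov-modulated Poisson process: a doubly stochastic
    Poisson process N(t) whose rate switches with a hidden two-state
    Markov chain X(t).

    rates: Poisson intensity in each hidden state (events per unit time)
    q:     switching rate out of each state
    Returns the event (e.g. bucket-tip) times. Illustrative sketch only.
    """
    rng = random.Random(seed)
    t, state, events = 0.0, 0, []
    while t < t_end:
        # competing exponentials: next state switch vs. next event
        t_switch = rng.expovariate(q[state])
        t_event = rng.expovariate(rates[state]) if rates[state] > 0 else float("inf")
        if t_event < t_switch:
            t += t_event
            if t < t_end:
                events.append(t)
        else:
            t += t_switch
            state = 1 - state  # flip the hidden two-state chain
    return events
```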

Keywords: fine-scale rainfall, maximum likelihood, point process, stochastic model

Procedia PDF Downloads 270
457 Evaluating the Seismic Stress Distribution in the High-Rise Structures Connections with Optimal Bracing System

Authors: H. R. Vosoughifar, Seyedeh Zeinab. Hosseininejad, Nahid Shabazi, Seyed Mohialdin Hosseininejad

Abstract:

In recent years, structural designers have advocated wider application of energy absorption devices for damping lateral loads. The Unbonded Braced Frame (UBF) system is one of the efficient damping systems, made of a smart combination of steel and concrete or mortar. In this system, the steel bears the earthquake-induced axial force, in compression or tension, without loss of strength, while the concrete or mortar around the steel core acts as a constraint and prevents brace buckling under seismic axial load. In this study, the optimal bracing system for high-rise structures has been evaluated by considering the seismic stress distribution in the connections. An actual 18-story structure was modeled in suitable Finite Element (FE) software and braced, in turn, with UBF, Eccentrically Braced Frame (EBF), and Concentrically Braced Frame (CBF) systems. Nonlinear static pushover and time-history analyses were then performed; the results demonstrate that the UBF system reduces drift values in high-rise buildings. Further statistical analyses show a significant difference between the drift values of the UBF system and those of the EBF and CBF systems. Hence, the seismic stress distribution in the connections of the proposed structure braced with the UBF system was investigated.

Keywords: optimal bracing system, high-rise structure, finite element analysis (FEA), seismic stress

Procedia PDF Downloads 425
456 Control of Base Isolated Benchmark using Combined Control Strategy with Fuzzy Algorithm Subjected to Near-Field Earthquakes

Authors: Hashem Shariatmadar, Mozhgansadat Momtazdargahi

Abstract:

The purpose of structural control against earthquakes is to dissipate the earthquake input energy and reduce the plastic deformation of structural members. Methods of structural control against earthquakes, used to reduce the structural response, are classified as active, semi-active, passive, and hybrid. In this paper, two combined control systems are used: the first comprises a base isolator and multiple tuned mass dampers (BI & MTMD), and the second a hybrid base isolator and multiple tuned mass dampers (HBI & MTMD), both applied to control an eight-story isolated benchmark steel structure. The active control force of the hybrid isolator is estimated by a fuzzy logic algorithm. The influence of the combined systems on the responses of the benchmark structure under two near-field earthquakes (Newhall and El Centro) is evaluated by nonlinear dynamic time-history analysis. Combined control systems consisting of passive or active devices installed in parallel with base-isolation bearings can reduce the response quantities (relative and absolute displacement) of base-isolated structures significantly. Therefore, in the design and control of irregular isolated structures using the proposed control systems, the structural demands (relative and absolute displacement, etc.) in each direction must be considered separately.

Keywords: base-isolated benchmark structure, multi-tuned mass dampers, hybrid isolators, near-field earthquake, fuzzy algorithm

Procedia PDF Downloads 298
455 Substantial Fatigue Similarity of a New Small-Scale Test Rig to Actual Wheel-Rail System

Authors: Meysam Naeimi, Zili Li, Roumen Petrov, Rolf Dollevoet, Jilt Sietsma, Jun Wu

Abstract:

The substantial similarity of the fatigue mechanism in a new test rig for rolling contact fatigue (RCF) has been investigated. A new reduced-scale test rig is designed to perform controlled RCF tests on wheel-rail materials. The fatigue mechanism of the rig is evaluated in this study using a combined finite element-fatigue prediction approach. The influence of loading conditions on fatigue crack initiation has been studied, and the effects of artificial squat-shaped defects on fatigue lives are examined. To simulate the vehicle-track interaction by means of the test rig, a three-dimensional finite element (FE) model is built. The nonlinear material behaviour of the rail steel is modelled in the contact interface. The results of the FE simulations are combined with the critical plane concept to determine the material points with the greatest likelihood of fatigue failure. Based on the stress-strain responses, fatigue life analysis is carried out by employing previously postulated criteria for fatigue crack initiation (plastic shakedown and ratcheting). The results are reported for various loading conditions and defect sizes. Afterwards, the cyclic mechanism of the test rig is evaluated from the operational viewpoint, and the fatigue life predictions are compared with the number of cycles expected from the rig's cyclic nature. Finally, the expected duration of the experiments until fatigue crack initiation is roughly estimated.

Keywords: fatigue, test rig, crack initiation, life, rail, squats

Procedia PDF Downloads 509
454 An Application of Integrated Multi-Objective Particles Swarm Optimization and Genetic Algorithm Metaheuristic through Fuzzy Logic for Optimization of Vehicle Routing Problems in Sugar Industry

Authors: Mukhtiar Singh, Sumeet Nagar

Abstract:

The vehicle routing problem (VRP) is a combinatorial optimization and nonlinear programming problem that seeks the best decisions regarding a given set of routes for a fleet of vehicles, in order to provide cost-effective and efficient delivery of goods and services to the intended customers. This paper proposes the application of integrated particle swarm optimization (PSO) and genetic algorithm (GA) optimization to address the vehicle routing problem in the sugarcane industry in India. The sugar industry is a prominent agro-based industry in India owing to its impact on rural livelihoods, and it is estimated to employ around 5 lakh (500,000) workers directly in sugar mills. The inadequacies and inefficiencies associated with the current vehicle routing model cause substantial financial losses to the industry, which need to be addressed in the proper context. The proposed algorithm utilizes the crossover operation that originates in the genetic algorithm (GA) to improve flexibility and avoid being trapped in local optima, and level set theory is added to improve the convergence speed of the algorithm. We apply the hybrid approach to an example VRP and compare its results with those generated by PSO, GA, and parallel PSO algorithms. The experimental comparison indicates that the hybrid algorithm outperforms the others and can become an effective approach for solving discrete combinatorial problems.
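A rough continuous-space sketch of the PSO-GA hybrid idea (velocity updates plus a crossover with the global best) is given below; the paper's discrete VRP encoding, fuzzy logic component, and level-set step are omitted, and all parameter values are illustrative:

```python
import random

def hybrid_pso_ga(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, pc=0.3, seed=1):
    """Minimize f over [0, 1]^dim with PSO plus a GA-style crossover step.

    The crossover (splicing a particle with the global best) is the GA
    ingredient used to escape local optima; everything else is standard
    inertia-weight PSO. Illustrative sketch, not the paper's algorithm.
    """
    rng = random.Random(seed)
    X = [[rng.random() for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                  # personal bests
    pf = [f(x) for x in X]
    g = P[min(range(n), key=lambda i: pf[i])][:]  # global best
    gf = min(pf)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(1.0, max(0.0, X[i][d] + V[i][d]))
            if rng.random() < pc:          # GA one-point crossover with g
                k = rng.randrange(dim)
                X[i] = X[i][:k] + g[k:]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    g, gf = X[i][:], fx
    return g, gf
```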

Keywords: fuzzy logic, genetic algorithm, particle swarm optimization, vehicle routing problem

Procedia PDF Downloads 392
453 Robust Heart Rate Estimation from Multiple Cardiovascular and Non-Cardiovascular Physiological Signals Using Signal Quality Indices and Kalman Filter

Authors: Shalini Rankawat, Mansi Rankawat, Rahul Dubey, Mazad Zaveri

Abstract:

Physiological signals such as the electrocardiogram (ECG) and arterial blood pressure (ABP) in the intensive care unit (ICU) are often seriously corrupted by noise, artifacts, and missing data, which lead to errors in heart rate (HR) estimation and to false alarms from ICU monitors. Clinical support in the ICU requires the most reliable heart rate estimation possible. Because of its relatively high electrical energy, cardiac activity may introduce artifacts into electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) recordings. This paper presents a robust heart rate estimation method based on detecting the R-peaks of ECG artifacts in EEG, EMG, and EOG signals, using an energy-based function and a novel signal quality index (SQI) assessment technique. SQIs of the physiological signals (EEG, EMG, and EOG) were obtained by correlating the nonlinear energy operator (Teager energy) of these signals with either the ECG or the ABP signal. HR is estimated from the ECG, ABP, EEG, EMG, and EOG signals by separate Kalman filters, based upon the individual SQIs. Data fusion of the HR estimates was then performed by weighting each estimate by the Kalman filters' SQI-modified innovations. The fused HR estimate is more accurate and robust than any of the individual estimates. The method was evaluated on the MIMIC II database of PhysioNet, containing bedside monitor recordings of ICU patients, and provides an accurate HR estimate even in the presence of noise and artifacts.
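As a simplified illustration of SQI-based fusion (a plain SQI-weighted average, standing in for the paper's Kalman-innovation weighting):

```python
def fuse_heart_rates(estimates, sqis):
    """Fuse per-channel HR estimates using their signal quality indices.

    Each channel (ECG, ABP, EEG, EMG, EOG) contributes its HR estimate
    weighted by its SQI, so noisy channels are down-weighted. This is a
    simplification: the paper weights by SQI-modified Kalman innovations.
    """
    total = sum(sqis)
    if total == 0:
        raise ValueError("all signal quality indices are zero")
    return sum(h * q for h, q in zip(estimates, sqis)) / total
```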

Keywords: ECG, ABP, EEG, EMG, EOG, ECG artifacts, Teager-Kaiser energy, heart rate, signal quality index, Kalman filter, data fusion

Procedia PDF Downloads 692
452 Process Monitoring Based on Parameterless Self-Organizing Map

Authors: Young Jae Choung, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a popular technique for process monitoring. A widely used tool in SPC is the control chart, which detects the abnormal status of a process and maintains its controlled status. Traditional control charts, such as Hotelling's T2 chart, are effective for detecting abnormal observations and monitoring processes. However, many complicated manufacturing systems exhibit nonlinearity because of the varying demands of the market, and the unregulated use of a traditional linear modeling approach may not be effective. In reality, many industrial processes exhibit nonlinear and time-varying behavior because of fluctuations in process raw materials, slow shifts of the set points, aging of the main process components, seasonal effects, and catalyst deactivation. Using traditional SPC techniques with time-varying data degrades the performance of the monitoring scheme. To address these issues, the present study proposes a parameterless self-organizing map (PLSOM)-based control chart. The PLSOM-based control chart can not only manage situations where the distribution or parameters of the target observations change, but can also address the nonlinearity of modern manufacturing systems. The control limits of the proposed PLSOM chart are established by estimating the empirical level of significance on the percentile using a bootstrap method. Experimental results with simulated data and actual process data from a thin-film transistor-liquid crystal display process demonstrated the effectiveness and usefulness of the proposed chart.
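The bootstrap-percentile control limit can be sketched as follows, assuming the monitoring statistic values of in-control data are already available; this illustrates the general percentile bootstrap, not the authors' exact procedure:

```python
import random

def bootstrap_control_limit(stats, alpha=0.01, n_boot=2000, seed=0):
    """Estimate an upper control limit as the empirical (1 - alpha)
    percentile of a monitoring statistic via bootstrap resampling.

    `stats` would be the PLSOM monitoring statistics of in-control data;
    any list of numbers works here. Each bootstrap resample contributes
    its (1 - alpha) percentile; the limit is their average.
    """
    rng = random.Random(seed)
    limits = []
    for _ in range(n_boot):
        sample = sorted(rng.choice(stats) for _ in stats)
        k = min(len(sample) - 1, int((1 - alpha) * len(sample)))
        limits.append(sample[k])
    return sum(limits) / n_boot
```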

Keywords: control chart, parameter-less self-organizing map, self-organizing map, time-varying property

Procedia PDF Downloads 267
451 Ethyl Methane Sulfonate-Induced Dunaliella salina KU11 Mutants Affected for Growth Rate, Cell Accumulation and Biomass

Authors: Vongsathorn Ngampuak, Yutachai Chookaew, Wipawee Dejtisakdi

Abstract:

Dunaliella salina has great potential as a system for generating commercially valuable products, including beta-carotene, pharmaceuticals, and biofuels. Our goal is to improve this potential by enhancing the growth rate and other properties of D. salina under optimal growth conditions. We used ethyl methane sulfonate (EMS) to generate random mutants in D. salina KU11, a strain classified in Thailand. In a preliminary experiment, we treated D. salina cells with 0%, 0.8%, 1.0%, 1.2%, 1.44%, and 1.66% EMS to generate a killing curve. We then randomly picked 30 candidates from approximately 300 isolated survivor colonies of the 1.44% EMS treatment (which permitted 30% survival) as an initial test of the mutant screen. Among the 30 survivor lines, two strains (mutants #17 and #24) had significantly improved growth rates and cell number accumulation at stationary phase, by approximately 1.8- and 1.45-fold, respectively; two strains (mutants #6 and #23) had significantly decreased growth rates and cell number accumulation at stationary phase, by approximately 1.4- and 1.35-fold, respectively; and 26 of the 30 lines had growth rates similar to the wild-type control. We also analyzed cell size for each strain and found no significant difference between any mutant and the wild type. In addition, mutant #24 showed an approximately 1.65-fold increase in biomass accumulation compared with the wild-type strain on day 5, as cultures entered early stationary phase. These preliminary results suggest that it is feasible to identify D. salina mutants with significantly improved growth rate, cell accumulation, and biomass production compared to the wild type for further study, making it possible to improve this microorganism as a platform for biotechnology applications.

Keywords: Dunaliella salina, ethyl methane sulfonate, growth rate, biomass

Procedia PDF Downloads 237
450 A Sensor Placement Methodology for Chemical Plants

Authors: Omid Ataei Nia, Karim Salahshoor

Abstract:

In this paper, a new precise and reliable sensor network design methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability, and reliability, together with importance-of-variables (IVs) as a novel measure in the Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take the importance of variables into account in the sensor network design procedure. In this paper, a specific weight is assigned to each sensor measuring a process variable in the network, indicating the importance of that variable over the others with respect to the ultimate sensor network application requirements. A set of distinct scenarios was run to evaluate the performance of the proposed methodology on a simulated Continuous Stirred Tank Reactor (CSTR), a highly nonlinear process plant benchmark. The results reveal the efficacy of the proposed method: it significantly improves accuracy with respect to alternative sensor network design approaches and secures the allocation of sensors to the most important process variables, a novel achievement in sensor network design.
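For reference, the constriction coefficient that gives CPSO its name follows the Clerc-Kennedy formula; a minimal sketch with the standard parameter choice c1 = c2 = 2.05:

```python
import math

def constriction_coefficient(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient used by CPSO.

    With phi = c1 + c2 > 4,
        chi = 2 / |2 - phi - sqrt(phi**2 - 4*phi)|
    multiplies the whole PSO velocity update,
        v <- chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x)),
    guaranteeing convergence without an explicit velocity clamp.
    """
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("phi = c1 + c2 must exceed 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

With the default c1 = c2 = 2.05 this gives the widely quoted value chi of approximately 0.7298.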

Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter

Procedia PDF Downloads 157
449 A Non-Linear Eddy Viscosity Model for Turbulent Natural Convection in Geophysical Flows

Authors: J. P. Panda, K. Sasmal, H. V. Warrior

Abstract:

Eddy viscosity models in turbulence modeling can be broadly classified as linear and nonlinear. Linear formulations are simple and require less computational resource, but they cannot predict the actual flow pattern in complex geophysical flows where streamline curvature and swirling motion are predominant. A constitutive equation for the Reynolds stress anisotropy is adopted for the formulation of eddy viscosity, including all possible higher-order terms quadratic in the mean velocity gradients, and a simplified model is developed for actual oceanic flows in which only the vertical velocity gradients are important. The new model is incorporated into the one-dimensional General Ocean Turbulence Model (GOTM). Two realistic oceanic test cases (OWS Papa and FLEX '76) have been investigated. The new model's predictions match well with the observational data and improve on the predictions of the two-equation k-epsilon model. The proposed model can easily be incorporated into the three-dimensional Princeton Ocean Model (POM) to simulate a wide range of oceanic processes. In practice, this model can be implemented in coastal regions, where transverse shear induces higher vorticity, and for the prediction of flow in estuaries and lakes, where the depth is comparatively small. The model's predictions of marine turbulence and related quantities (e.g., sea surface temperature, surface heat flux, and vertical temperature profiles) can be utilized in short-term ocean and climate forecasting and warning systems.

Keywords: Eddy viscosity, turbulence modeling, GOTM, CFD

Procedia PDF Downloads 196
448 Non-Linear Regression Modeling for Composite Distributions

Authors: Mostafa Aminzadeh, Min Deng

Abstract:

Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also inform marketing decisions. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the normal, exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. Note that this approach can be applied to a dataset whenever goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica program uses the Fisher scoring algorithm as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
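A one-parameter stand-in for the Fisher scoring iteration (here estimating the rate of an exponential distribution, not the paper's composite-distribution regression) can be sketched as:

```python
def fisher_scoring_exponential(data, lam=1.0, tol=1e-10, max_iter=100):
    """Fisher scoring for the rate lambda of an exponential distribution.

    Score:       U(lam) = n/lam - sum(x)
    Fisher info: I(lam) = n/lam**2
    Update:      lam <- lam + U(lam)/I(lam)
    Converges to the MLE 1/mean(x). A minimal single-parameter analogue
    of the scoring iteration the authors apply to regression parameters.
    """
    n = len(data)
    s = sum(data)
    for _ in range(max_iter):
        score = n / lam - s
        info = n / (lam * lam)
        step = score / info
        lam += step
        if abs(step) < tol:
            break
    return lam
```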

Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions

Procedia PDF Downloads 26
447 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution

Authors: Masomeh Jamshid Nejad

Abstract:

Nowadays, statistical literacy is no longer merely a desirable skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, business students must understand the properties of the normal distribution (the mean and standard deviation). This requires undergraduate students in economics and business management to visualize and work with data following a normal distribution. Since technology is now interconnected with education, it is important to teach statistics topics to undergraduate students in the context of Python, RStudio, and Microsoft Excel. This research sheds light on the effect of Excel-based instruction on learners' knowledge of statistics, specifically the central concept of the normal distribution. Two groups of undergraduate students from the Business Management program were compared: one group underwent Excel-based instruction, while the other relied only on traditional teaching methods. We analyzed experimental data and BBA participants' responses to statistics questions focusing on the normal distribution and its key attributes, such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports them in comprehending statistical concepts more effectively than traditional teaching alone. In addition, students receiving Excel-based instruction showed a stronger ability to visualize and interpret data following a normal distribution.
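For comparison with the spreadsheet exercises, Excel's NORM.DIST(x, mean, standard_dev, cumulative) can be reproduced in a few lines of Python using the error function:

```python
import math

def norm_dist(x, mean, sd, cumulative=True):
    """Mimic Excel's NORM.DIST(x, mean, standard_dev, cumulative).

    cumulative=True returns the CDF via the error function;
    cumulative=False returns the probability density. Useful for
    checking spreadsheet results against hand computation.
    """
    z = (x - mean) / sd
    if cumulative:
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))
```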

Keywords: statistics, excel-based instruction, data visualization, pedagogy

Procedia PDF Downloads 50
446 By Removing High-Performance Aerobic Scope Phenotypes, Capture Fisheries May Reduce the Resilience of Fished Populations to Thermal Variability and Compromise Their Persistence into the Anthropocene

Authors: Lauren A. Bailey, Amber R. Childs, Nicola C. James, Murray I. Duncan, Alexander Winkler, Warren M. Potts

Abstract:

For fished populations to persist in the Anthropocene, it is critical to predict how they will respond to the coupled threats of exploitation and climate change, so that management can adapt. The resilience of fished populations will depend on their capacity for physiological plasticity and acclimatization in response to environmental shifts. However, there is evidence that capture fisheries select on physiological traits; fished populations may therefore have limited scope for rapid expansion of their tolerance ranges or for physiological adaptation under fishing pressure. To determine the physiological vulnerability of fished populations in the Anthropocene, metabolic performance in response to thermal variability was compared between a fished and a spatially protected Chrysoblephus laticeps population. Individual aerobic scope phenotypes were quantified using intermittent-flow respirometry by comparing the change in each individual's energy expenditure at ecologically relevant temperatures, mimicking the variability experienced during upwelling and downwelling events. The proportions of high- and low-performance individuals were compared between the fished and spatially protected populations. The fished population had limited aerobic scope phenotype diversity and fewer high-performance phenotypes, resulting in a significantly lower aerobic scope curve across the low (10 °C) and high (24 °C) thermal treatments. The performance of fished populations may be further compromised by the predicted future increase in cold upwelling events. This calls for conserving the physiologically fittest individuals in spatially protected areas, from which they can recruit into nearby fished areas, as a climate resilience tool.
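Aerobic scope itself is a simple derived quantity; a minimal sketch of the absolute and factorial scopes computed from respirometry estimates (units are illustrative):

```python
def aerobic_scopes(smr, mmr):
    """Absolute (MMR - SMR) and factorial (MMR / SMR) aerobic scope.

    SMR and MMR are the standard and maximum metabolic rates estimated
    from intermittent-flow respirometry (e.g. mg O2 / kg / h). Computed
    per temperature, the aerobic scope values trace the thermal
    performance curve compared between populations.
    """
    if smr <= 0 or mmr < smr:
        raise ValueError("require 0 < SMR <= MMR")
    return mmr - smr, mmr / smr
```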

Keywords: climate change, fish physiology, metabolic shifts, over-fishing, respirometry

Procedia PDF Downloads 126