Search results for: simulated
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1683

1503 Influential Health Care System Rankings Can Conceal Maximal Inequities: A Simulation Study

Authors: Samuel Reisman

Abstract:

Background: Comparative rankings are increasingly used to evaluate health care systems. These rankings combine discrete attribute rankings into a composite overall ranking. Health care equity is one component of the overall rankings, but excelling in other categories can counterbalance a low equity grade. A highly ranked but inequitable health care system would, in effect, commend a system that disregards human rights. We simulated the ranking of a maximally inequitable health care system using a published, influential ranking methodology. Methods: We used The Commonwealth Fund’s ranking of eleven health care systems to simulate the rank of a maximally inequitable system. Eighty performance indicators were simulated, assuming the worst possible performance on all equity benchmarks and top rankings in all non-equity subcategories. Subsequent stepwise simulations lowered all non-equity rank positions by one. Results: The maximally inequitable health care system ranked first overall. Three subsequent stepwise simulations, each lowering the non-equity rankings by one, all resulted in an overall ranking within the top three. Discussion: Our results demonstrate that grossly inequitable health care systems can rank highly in comparative health care system rankings. These findings challenge the validity of ranking methodologies that subsume equity under broader benchmarks. We advocate limiting the maximum overall ranking of a health care system to its individual equity ranking. Such a limit is logical, given that improvements to a health care system mean little to those who lack adequate access to health care.
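
A minimal sketch of the ranking arithmetic described above, with hypothetical rank data rather than The Commonwealth Fund's actual scores, shows how a composite built from mean subcategory ranks can place a maximally inequitable system first, and how capping the overall rank at the equity rank removes the effect:

    # System 0 is the simulated maximally inequitable system: rank 1 in all
    # four non-equity subcategories, rank 11 (last) on equity.
    import numpy as np

    non_equity = np.arange(2, 12)                      # other systems: ranks 2..11
    equity = np.arange(10, 0, -1)                      # ...paired with equity ranks 10..1
    field = np.column_stack([non_equity] * 4 + [equity])
    all_ranks = np.vstack([[1, 1, 1, 1, 11], field])   # 11 systems x 5 subcategories

    composite = all_ranks.mean(axis=1)                 # composite = mean subcategory rank
    overall = composite.argsort().argsort() + 1        # 1 = best overall
    print("overall rank of the inequitable system:", overall[0])  # -> 1

    # Proposed remedy: the overall rank may be no better than the equity rank.
    capped = np.maximum(overall, all_ranks[:, -1])
    print("capped overall rank:", capped[0])           # -> 11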

Keywords: global health, health equity, healthcare systems, international health

Procedia PDF Downloads 370
1502 Using Interval Type-2 Fuzzy Controller for Diabetes Mellitus

Authors: Nafiseh Mollaei, Reihaneh Kardehi Moghaddam

Abstract:

In diabetes mellitus, the control of insulin is very difficult. This illness is an incurable disease affecting millions of people worldwide. Glucose is a sugar which provides energy to the cells, and insulin is a hormone which supports the absorption of glucose. A fuzzy control strategy is attractive for glucose control because it mimics the first- and second-phase responses that the pancreatic beta cells use to control glucose. We propose two control algorithms, a type-1 fuzzy controller and an interval type-2 fuzzy method, for insulin infusion. The closed-loop system has been simulated for different patients with different parameters, in the presence of a food intake disturbance, and it has been shown that the blood glucose concentration settles at a normoglycemic level of 110 mg/dl in a reasonable amount of time. This paper treats type 1 diabetes as a nonlinear model, which has been simulated in the MATLAB-Simulink environment. The novel model, termed the Augmented Minimal Model, is used in the simulations. There are some uncertainties in this model due to factors such as blood glucose, daily meals, or sudden stress. To eliminate the effects of this uncertainty, different control methods may be utilized. In this article, the fuzzy controllers' performance was assessed in terms of their ability to track a normoglycemic set point (110 mg/dl) in response to a [0-10] g meal disturbance. Finally, the development reported in this paper is intended to simplify insulin delivery and so increase the patient's quality of life.
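
The closed-loop idea can be sketched with a Bergman-type minimal model and a crude proportional infusion rule standing in for the fuzzy controller; the parameter values below are illustrative and are not those of the paper's Augmented Minimal Model:

    # Euler integration of a minimal glucose-insulin model with a naive
    # infusion rule (placeholder for the type-1/type-2 fuzzy logic).
    p1, p2, p3, n = 0.0287, 0.0283, 5.0e-5, 0.093   # rate constants, 1/min
    Gb, Ib, V = 81.3, 15.0, 12.0                    # basal glucose/insulin, volume
    G, X, I = 180.0, 0.0, Ib                        # initial hyperglycemic state
    dt, setpoint = 1.0, 110.0                       # min, mg/dl

    for t in range(600):                            # 10 hours
        u = max(0.0, 0.05 * (G - setpoint))         # insulin infusion rule
        meal = 0.5 if 100 <= t < 120 else 0.0       # meal disturbance, mg/dl/min
        dG = -p1 * (G - Gb) - X * G + meal
        dX = -p2 * X + p3 * (I - Ib)
        dI = -n * (I - Ib) + u / V
        G, X, I = G + dt * dG, X + dt * dX, I + dt * dI

    print(f"glucose after 10 h: {G:.1f} mg/dl")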

Keywords: interval type-2, fuzzy controller, minimal augmented model, uncertainty

Procedia PDF Downloads 401
1501 Hydrological Evaluation of Satellite Precipitation Products Using IHACRES Rainfall-Runoff Model over a Basin in Iran

Authors: Mahmoud Zakeri Niri, Saber Moazami, Arman Abdollahipour, Hossein Ghalkhani

Abstract:

The objective of this research is the hydrological evaluation of four widely used satellite precipitation products, PERSIANN, TMPA-3B42V7, TMPA-3B42RT, and CMORPH, over the Zarinehrood basin in Iran. To this end, daily streamflow of the Sarough-chay river in the Zarinehrood basin was first simulated using the IHACRES rainfall-runoff model with daily rain gauge and temperature data as input from 1988 to 2008. The model was then calibrated over two different periods by comparing the simulated discharge with the discharge observed at hydrometric stations. Moreover, in order to evaluate the performance of the satellite precipitation products in streamflow simulation, the calibrated model was validated using daily satellite rainfall estimates for the period 2003 to 2008. The obtained results indicate that TMPA-3B42V7, with a CC of 0.69, RMSE of 5.93 mm/day, MAE of 4.76 mm/day, and RBias of -5.39%, simulates streamflow better than PERSIANN and CMORPH over the study area. It is noteworthy that in Iran the availability of ground measuring station data is very limited because of the sparse density of hydro-meteorological networks. On the other hand, the large spatial and temporal variability of precipitation and the lack of a reliable and extensive observing system are the most important challenges for rainfall analysis, flood prediction, and other hydrological applications in this country.
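
The evaluation statistics quoted above (CC, RMSE, MAE, RBias) can be reproduced as follows, on illustrative arrays standing in for observed and simulated discharge:

    import numpy as np

    obs = np.array([3.2, 5.1, 8.7, 6.0, 4.4])        # observed discharge (hypothetical)
    sim = np.array([2.9, 5.6, 7.9, 6.3, 4.0])        # discharge simulated from satellite rainfall

    cc = np.corrcoef(obs, sim)[0, 1]                 # correlation coefficient
    rmse = np.sqrt(np.mean((sim - obs) ** 2))        # root mean square error
    mae = np.mean(np.abs(sim - obs))                 # mean absolute error
    rbias = 100 * np.sum(sim - obs) / np.sum(obs)    # relative bias, %
    print(f"CC={cc:.2f} RMSE={rmse:.2f} MAE={mae:.2f} RBias={rbias:.2f}%")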

Keywords: hydrological evaluation, IHACRES, satellite precipitation product, streamflow simulation

Procedia PDF Downloads 210
1500 Biosorption of Manganese Mine Effluents Using Crude Chitin from Philippine Bivalves

Authors: Randy Molejona Jr., Elaine Nicole Saquin

Abstract:

The area around the Ajuy river in Iloilo, Philippines, is currently being mined for manganese ore, and river water samples exceed the maximum manganese contaminant level set by US-EPA. At the same time, the surplus of local bivalve waste is another environmental concern. Synthetic chemical treatment compromises water quality, leaving toxic residues. An alternative treatment process is therefore biosorption: the use of the physical and chemical properties of biomass to adsorb heavy metals in contaminated water. The study aims to extract crude chitin from shell wastes of Bractechlamys vexillum, Perna viridis, and Placuna placenta and determine its adsorption capacity for manganese in simulated and actual mine water. Crude chitin was obtained by pulverization, deproteinization, demineralization, and decolorization of the shells. Biosorption by flocculation followed a 5 g : 50 mL chitin-to-water ratio. Filtrates were analyzed using MP-AES after 24 hours. In actual and simulated mine water, respectively, B. vexillum yielded the highest adsorption percentages of 91.43% and 99.58%, comparable to P. placenta at 91.43% and 99.37%, and significantly different from P. viridis at -57.14% and 31.53% (p < 0.05). FT-IR validated the presence of chitin in the shells based on carbonyl-containing functional groups at peaks of 1530-1560 cm⁻¹ and 1660-1680 cm⁻¹. SEM micrographs showed the amorphous and non-homogeneous structure of chitin. Thus, crude chitin from B. vexillum and P. placenta can serve as a biosorbent for the treatment of manganese-impacted effluents, and its use promotes appropriate waste management of local bivalves.

Keywords: biosorption, chitin, FT-IR, mine effluents, SEM

Procedia PDF Downloads 162
1499 Studies of the Corrosion Kinetics of Metal Alloys in Stagnant Simulated Seawater Environment

Authors: G. Kabir, A. M. Mohammed, M. A. Bawa

Abstract:

The paper presents the corrosion behavior of Naval Brass, an aluminum alloy, and carbon steel in simulated seawater under stagnant conditions. The behavior was characterized as a function of chloride ion concentration, varied between 3.0 wt% and 3.5 wt%, and exposure time. The weight-loss coupon immersion technique was employed: the weight loss of the various alloys was measured, and from the obtained results the corrosion rate was determined. It was found that the corrosion rates of the various alloys are related to the chloride ion concentration, the exposure time, and the kinetics of passive film formation of each alloy. Carbon steel suffers corrosion many times more severe than Naval Brass, which indicates that the brass exhibits relatively strong resistance to corrosion in the seawater exposure environment. The aluminum alloy, in turn, exhibited even better corrosion resistance than the Naval Brass studied. Despite their prohibitive cost, Naval Brass and the aluminum alloy show corrosion behavior that can offer a wide range of applications in seashore operations. The corrosion kinetics parameters indicate that the corrosion reaction is limited by diffusion mass transfer of the reacting species rather than being reaction controlled.
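
A sketch of the weight-loss corrosion rate calculation (the standard ASTM G1 form, with hypothetical coupon values):

    # CR [mm/year] = K * W / (A * T * D)
    K = 8.76e4           # units constant for mm/year
    W = 0.025            # mass loss, g (hypothetical)
    A = 10.0             # exposed coupon area, cm^2
    T = 720.0            # exposure time, hours (30 days)
    D = 7.85             # alloy density, g/cm^3 (carbon steel)

    corrosion_rate = K * W / (A * T * D)
    print(f"corrosion rate: {corrosion_rate:.4f} mm/year")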

Keywords: alloys, chloride ions concentration, corrosion kinetics, corrosion rate, diffusion mass transfer, exposure time, seawater, weight loss

Procedia PDF Downloads 274
1498 A Pipeline for Detecting Copy Number Variation from Whole Exome Sequencing Using Comprehensive Tools

Authors: Cheng-Yang Lee, Petrus Tang, Tzu-Hao Chang

Abstract:

Copy number variations (CNVs) play an important role in many human diseases, such as autism, schizophrenia, and a number of cancers. Many disease-related variants are found in genome coding regions, and whole exome sequencing (WES) is a cost-effective and powerful technology for detecting variants that are enriched in exons, with potential applications in clinical settings. Although several algorithms have been developed to detect CNVs from WES and have been compared against other algorithms on their authors' own samples, no consistent dataset has been used across algorithms to evaluate CNV detection ability. Moreover, most algorithms use a command-line interface, which may greatly limit the analysis capability of many laboratories. We create a series of simulated WES datasets from UCSC hg19 chromosome 22 and then evaluate the CNV detection ability of 19 algorithms from the OMICtools database using these simulated datasets. We compute the sensitivity, specificity, and accuracy of each algorithm for validation of the exome-derived CNVs. After comparing the 19 algorithms, we construct a platform that installs all of them in a virtual machine (e.g., VirtualBox), which can be set up conveniently on local computers, and create a simple script for detecting CNVs with the algorithms selected by the user. We also build a table summarizing characteristics such as input requirements and CNV detection ability for all of the algorithms, giving users a specification from which to choose the optimal algorithm.
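
The validation metrics named above are computed from the confusion counts of calls against the simulated truth; a sketch with hypothetical counts:

    tp, fp, tn, fn = 42, 8, 930, 20    # hypothetical CNV call counts

    sensitivity = tp / (tp + fn)       # fraction of true CNVs detected
    specificity = tn / (tn + fp)       # fraction of non-CNV regions correctly ignored
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} accuracy={accuracy:.3f}")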

Keywords: whole exome sequencing, copy number variations, omictools, pipeline

Procedia PDF Downloads 286
1497 Power Transformers Insulation Material Investigations: Partial Discharge

Authors: Jalal M. Abdallah

Abstract:

Testing and investigating the reliability of different types of transformer insulation materials poses a great problem: how to recreate and simulate the real conditions of a working transformer and test its insulation materials for partial discharge (PD) as in the working mode. Many tests may give untrue results, as the physical behavior of the insulation material under test differs from that in its working condition. In this work, the real working conditions were simulated and a large number of specimens were tested. The first stage of the investigation began with choosing samples of different types of insulation materials (papers, pressboards, etc.). In the second stage, the samples were dried in ovens at 105 °C and 0.01 bar for 48 hours and then impregnated with dried, degassed oil (water content less than 6 ppm) at 105 °C and 0.01 bar for 48 hours, after which the specimens were cooled at room pressure and temperature for 24 hours. The third stage was investigating PD in the samples using an ICM PD measuring device. After that, a continuous test on the oil-impregnated insulation materials (paper, pressboards) was developed, and the phase-resolved partial discharge patterns of the PD signals were measured. The importance of this work lies in providing the industrial sector with trusted, highly accurate measurement results based on realistically simulated working conditions. All the PD patterns (results) are associated with discharges produced under well-controlled laboratory conditions and are compared with previous results from other laboratories. In addition, the influence of different temperature conditions on partial discharge activity was studied.

Keywords: transformers, insulation materials, voids, partial discharge

Procedia PDF Downloads 286
1496 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products they want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-Means cluster-based heuristic approach, a hybrid Genetic Algorithm (GA) - Simulated Annealing (SA) approach, a hybrid K-Means cluster-based heuristic-GA, and a hybrid K-Means cluster-based heuristic-GA-SA for the open loop supply chain network problem are proposed. The study incorporated uniform and combined crossover operators in the GAs for solving the open loop supply chain distribution network problem. The algorithms were tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the hybrid K-Means cluster-based heuristic-GA-SA shows superior performance to the other methods for solving the open loop supply chain distribution network problem.
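
A minimal sketch of the simulated annealing component that the hybrids build on (a generic objective stands in for the paper's distribution-network cost model):

    import math, random

    def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
        x = best = x0
        t = t0
        for _ in range(steps):
            y = neighbor(x)
            delta = cost(y) - cost(x)
            # always accept improvements; accept worse moves with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = y
                if cost(x) < cost(best):
                    best = x
            t *= cooling               # geometric cooling schedule
        return best

    # toy usage: minimize a one-dimensional cost over the integers
    best = anneal(cost=lambda x: (x - 7) ** 2,
                  neighbor=lambda x: x + random.choice([-1, 1]),
                  x0=50)
    print("best solution found:", best)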

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 129
1495 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study

Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu

Abstract:

Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized for the calculated results to be convincing and to reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance to the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system applicable to most persons was established based on anatomical structures and physiological parameters. Patient-specific physiological data of 5 volunteers were non-invasively collected as personalization objectives for the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted to determine the sensitive parameters that have an obvious influence on these objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function during optimization being the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized over 500 iterations. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between the collected and simulated data: the average relative root mean square error over all optimization objectives for the 5 samples was 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of LPMs for the blood circulatory system. An LPM with individualized parameters can output individual physiological indicators after optimization, applicable to the numerical simulation of patient-specific hemodynamics.
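
As a sketch of the optimization loop's ingredients, the snippet below pairs the simplest circuit element an LPM is assembled from, a two-element Windkessel, with the root-mean-square-error objective; the model and all values are illustrative stand-ins for the paper's closed-loop LPM:

    import numpy as np

    def windkessel_pressure(R, C, flow, dt=0.01, p0=80.0):
        # dP/dt = (Q - P/R) / C : pressure response to an inflow waveform
        p = np.empty(len(flow))
        p[0] = p0
        for i in range(1, len(flow)):
            p[i] = p[i - 1] + dt * (flow[i - 1] - p[i - 1] / R) / C
        return p

    t = np.arange(0.0, 1.0, 0.01)                               # one cardiac cycle, s
    inflow = np.where(t < 0.3, 400 * np.sin(np.pi * t / 0.3), 0.0)
    target = windkessel_pressure(R=1.0, C=1.5, flow=inflow)     # stands in for measured data

    def rmse(params):                                           # objective to be minimized
        R, C = params
        return np.sqrt(np.mean((windkessel_pressure(R, C, inflow) - target) ** 2))

    print("RMSE at a trial parameter point:", round(rmse((0.8, 1.2)), 3))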

Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm

Procedia PDF Downloads 116
1494 Design of Multiband Microstrip Antenna Using Stepped Cut Method for WLAN/WiMAX and C/Ku-Band Applications

Authors: Ahmed Boutejdar, Bishoy I. Halim, Soumia El Hani, Larbi Bellarbi, Amal Afyf

Abstract:

In this paper, a planar monopole antenna for multiband applications is proposed. The antenna operates at three frequencies, 3.7, 6.2, and 13.5 GHz, which cover different communication frequency ranges. The antenna consists of a quasi-modified rectangular radiating patch with a partial ground plane and two parasitic elements (open-loop ring resonators) that serve as coupling bridges. A stepped cut at the lower corners of the radiating patch and the partial ground plane is used to achieve the multiband features. The proposed antenna is manufactured on an FR4 substrate and is simulated and optimized using the High Frequency Structure Simulator (HFSS). The antenna topology occupies 30.5 × 30 × 1.6 mm³. The measured results demonstrate that the candidate antenna has impedance bandwidths for 10 dB return loss from 3.80-3.90 GHz, 4.10-5.20 GHz, 11.2-11.5 GHz, and 12.5-14.0 GHz, which meet the requirements of wireless local area network (WLAN), worldwide interoperability for microwave access (WiMAX), C-band (uplink), and Ku-band (uplink) applications. Acceptable agreement is obtained between measurement and simulation results. Experimental results show that the antenna is successfully simulated and measured, that tri-band operation can be achieved by adjusting the lengths of the three elements, and that it gives good gain across all the operating bands.

Keywords: planar monopole antenna, FR4 substrate, HFSS, WLAN, WiMAX, C and Ku

Procedia PDF Downloads 165
1493 Using Monte Carlo Model for Simulation of Rented Housing in Mashhad, Iran

Authors: Mohammad Rahim Rahnama

Abstract:

The study employs the Monte Carlo method to simulate rented housing in Mashhad, the second largest city in Iran. A total of 334 rental residential units in Mashhad, including both apartments and houses (villas), were randomly selected from advertisements placed in Khorasan newspapers during July and August of 2015. In order to simulate the monthly rent price, a rent index was calculated by combining the mortgage (deposit) and the rent price. In the next step, the relation between floor area and number of bedrooms for each unit, for both apartments and houses (villas), was estimated through multivariate regression using SPSS and was coded in XML. The initial model was called using the simulation feature in SPSS and was simulated using triangular and binomial algorithms. The findings revealed that the average simulated rent index was 548.5$ per month. Calculating the sensitivity of the rent index to the number of bedrooms, we found, first, that 97% of units have three bedrooms and, second, that as the number of bedrooms increases from one to three, for rent prices of less than 200$ the percentage of units having one bedroom decreases from 10% to 0. Conversely, for units with rent prices of more than 571.4$, the percentage increases from 37% to 48%. In light of these findings, it becomes clear that planning the construction of rental residential units, overseeing rent prices, and granting subsidies to rental units, particularly two-bedroom apartments, represent a sound policy for regulating residential units in Mashhad.
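
The Monte Carlo step itself reduces to drawing simulated rent indices from the fitted distributions; a sketch with a triangular distribution whose hypothetical bounds echo the reported 548.5$ average:

    import numpy as np

    rng = np.random.default_rng(42)
    rent = rng.triangular(left=150, mode=500, right=1000, size=10_000)  # $/month

    print(f"mean simulated rent index: {rent.mean():.1f} $/month")
    print(f"share of units under 200$: {np.mean(rent < 200):.1%}")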

Keywords: Mashhad, Monte Carlo, simulation, rent price, residential unit

Procedia PDF Downloads 247
1492 Impact of Data and Model Choices to Urban Flood Risk Assessments

Authors: Abhishek Saha, Serene Tay, Gerard Pijcke

Abstract:

The availability of high-resolution topography and rainfall information in urban areas has made it necessary to revise the modeling approaches used for flood risk assessments. Lidar-derived elevation models with resolutions of 1 m or finer are becoming widely accessible. The classical 1D-2D approach, in which channel flow is simulated in 1D and coupled with a coarse-resolution 2D overland flow model, may not fully utilize the information provided by high-resolution data. In this context, a study was undertaken to compare three different modeling approaches to simulating flooding in an urban area. The first, base model is Sobek, which uses a 1D formulation together with hydrologic boundary conditions and couples it with a 2D overland flow model. The second uses a full 2D model for the entire area, solving the shallow water equations at the resolution of the digital elevation model (DEM). These are compared against another 2D shallow water equation solver, which uses a subgrid method for grid refinement. The models are run for DEM horizontal resolutions varying between 1 m and 5 m. The results show significant differences in inundation extents and water levels for the different DEMs, and they are also sensitive to the choice of numerical model even with the same physical parameters, such as friction. The study shows the importance of having reliable field observations of inundation extents and levels before a choice of model and data can be made for spatial flood risk assessments.

Keywords: flooding, DEM, shallow water equations, subgrid

Procedia PDF Downloads 116
1491 Optimized Design, Material Selection, and Improvement of Liners, Mother Plate, and Stone Box of a Direct Charge Transfer Chute in a Sinter Plant: A Computational Approach

Authors: Anamitra Ghosh, Neeladri Paul

Abstract:

The present work aims at investigating material combinations and thereby developing an optimized design of the liner-mother plate arrangement and of the stone box, such that it has low cost and high weldability, is sufficiently capable of withstanding the increased corrosive shear and bending loads, and has a reduced thermal expansion coefficient at temperatures close to 1000 degrees Celsius. All the above factors were preliminarily examined using a computational approach via ANSYS thermo-structural computation, commercial software that uses the finite element method to analyze the response of simulated design specimens of the liner-mother plate arrangement and the stone box to varied bending, shear, and thermal loads, as well as to determine the temperature gradients developed across various surfaces of the designs. Finally, the optimized structural designs of the liner-mother plate arrangement and of the stone box, with improved materials and better structural and thermal properties, were selected via a trial-and-error method. The final improved design is therefore expected to enhance the overall life and reliability of a direct charge transfer chute, which transfers and segregates the hot sinter onto the cooler in a sinter plant.

Keywords: shear, bending, thermal, sinter, simulated, optimized, charge, transfer, chute, expansion, computational, corrosive, stone box, liner, mother plate, arrangement, material

Procedia PDF Downloads 81
1490 Modeling Operating Theater Scheduling and Configuration: An Integrated Model in Health-Care Logistics

Authors: Sina Keyhanian, Abbas Ahmadi, Behrooz Karimi

Abstract:

We present a multi-objective binary programming model that simultaneously considers the scheduling of surgical cases among operating rooms and the configuration of surgical instruments in limited-capacity hospital trays. Many mathematical models have been developed previously in the literature to address different challenges in health-care logistics, such as assigning operating rooms, leveling beds, etc. But what happens inside the operating rooms, the inventory management of the instruments required for various operations, and their integration with surgical scheduling have been poorly discussed. Our model considers the minimization of movements between trays during a surgery, which recalls the famous cell formation problem in group technology. This assumption can also provide a major potential contribution to robotic surgeries. The tray configuration problem, which consumes the surgical instruments requirement plan (SIRP) and the sequence of surgical procedures based on required instruments (SIRO), is nested inside a bin packing problem. This modeling approach helps us understand that solutions with the same objective value will not necessarily be identical when it comes to the rearrangement of surgeries among rooms. A numerical example is solved via a proposed nested simulated annealing (SA) optimization approach, which provides insights into how various configurations inside a solution can alter the optimal condition.
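
The inner bin-packing layer can be sketched with a first-fit decreasing assignment of instruments to limited-capacity trays (capacities and instrument sizes are hypothetical; in the paper this layer is nested inside the simulated annealing search):

    def first_fit_decreasing(instrument_sizes, tray_capacity):
        trays = []                                   # each tray holds a list of sizes
        for size in sorted(instrument_sizes, reverse=True):
            for tray in trays:
                if sum(tray) + size <= tray_capacity:
                    tray.append(size)                # fits in an existing tray
                    break
            else:
                trays.append([size])                 # open a new tray
        return trays

    trays = first_fit_decreasing([4, 8, 1, 4, 2, 1, 3, 5], tray_capacity=10)
    print(len(trays), "trays:", trays)               # -> 3 trays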

Keywords: health-care logistics, hospital tray configuration, off-line bin packing, simulated annealing optimization, surgical case scheduling

Procedia PDF Downloads 248
1489 The Network Relative Model Accuracy (NeRMA) Score: A Method to Quantify the Accuracy of Prediction Models in a Concurrent External Validation

Authors: Carl van Walraven, Meltem Tuna

Abstract:

Background: Network meta-analysis (NMA) quantifies the relative efficacy of 3 or more interventions from studies containing a subgroup of interventions. This study applied the analytical approach of NMA to quantify the relative accuracy of prediction models with distinct inclusion criteria that are evaluated on a common population (‘concurrent external validation’). Methods: We simulated binary events in 5000 patients using a known risk function. We biased the risk function and modified its precision by pre-specified amounts to create 15 prediction models with varying accuracy and distinct patient applicability. Prediction model accuracy was measured using the Scaled Brier Score (SBS). Overall prediction model accuracy was measured using fixed-effects methods that accounted for model applicability patterns. Prediction model accuracy was summarized as the Network Relative Model Accuracy (NeRMA) Score, which ranges from -∞ through 0 (the accuracy of random guessing) to 1 (the accuracy of the most accurate model in the concurrent external validation). Results: The unbiased prediction model had the highest SBS. The NeRMA score correctly ranked all simulated prediction models by the extent of their bias from the known risk function. A SAS macro and an R function were created to implement the NeRMA Score. Conclusions: The NeRMA Score makes it possible to quantify the accuracy of binomial prediction models having distinct inclusion criteria in a concurrent external validation.
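
The Scaled Brier Score at the heart of the method compares a model's Brier score with that of a prevalence-only null model; a sketch on illustrative data:

    # SBS = 1 - Brier / Brier_null
    import numpy as np

    y = np.array([1, 0, 0, 1, 0, 1, 0, 0])                   # observed binary events
    p = np.array([0.8, 0.2, 0.3, 0.6, 0.1, 0.7, 0.4, 0.2])   # model's predicted risks

    brier = np.mean((p - y) ** 2)
    brier_null = np.mean((y.mean() - y) ** 2)                # prevalence-only reference
    sbs = 1 - brier / brier_null
    print(f"Brier={brier:.3f} SBS={sbs:.3f}")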

Keywords: prediction model accuracy, scaled brier score, fixed effects methods, concurrent external validation

Procedia PDF Downloads 191
1488 Validation of SWAT Model for Prediction of Water Yield and Water Balance: Case Study of Upstream Catchment of Jebba Dam in Nigeria

Authors: Adeniyi G. Adeogun, Bolaji F. Sule, Adebayo W. Salami, Michael O. Daramola

Abstract:

Estimation of water yield and water balance in a river catchment is critical to the sustainable management of water resources at the watershed level in any country. Therefore, in the present study, the Soil and Water Assessment Tool (SWAT) interfaced with a Geographical Information System (GIS) was applied as a tool to predict the water balance and water yield of a catchment area in Nigeria. The catchment, with an area of 12,992 km², is located upstream of the Jebba hydropower dam in the north-central part of Nigeria. In this study, data on observed flow were collected and compared with flow simulated using SWAT. The correlation between the two data sets was evaluated using statistical measures such as the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R²). The model output shows good agreement between observed and simulated flow, as indicated by NSE and R² values greater than 0.7 for both the calibration and validation periods. A total of 42,733 mm of water was predicted by the calibrated model as the water yield potential of the basin for the simulation period 1985 to 2010. This performance suggests that SWAT could be a promising tool for predicting water balance and water yield in the sustainable management of water resources. In addition, SWAT could be applied to other basins in Nigeria as a decision support tool for sustainable water management.
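
The two goodness-of-fit measures used above are straightforward to compute; a sketch on illustrative observed and simulated flow series:

    import numpy as np

    obs = np.array([12.0, 30.5, 55.2, 41.8, 20.1])   # observed flow (hypothetical)
    sim = np.array([10.5, 33.0, 50.7, 44.0, 18.9])   # SWAT-simulated flow

    nse = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    print(f"NSE={nse:.3f} R2={r2:.3f}")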

Keywords: GIS, modeling, sensitivity analysis, SWAT, water yield, watershed level

Procedia PDF Downloads 396
1487 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends and holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is therefore to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data from the Italian market over a six-year period (2007-2012). We then perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is confirmed by standard tests, which highlight a good fit of the simulated values.
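
The deterministic component can be sketched as a linear trend plus Fourier terms for the yearly, weekly, and daily periodicities, fitted by least squares; synthetic data stand in for the Italian market series:

    import numpy as np

    hours = np.arange(24 * 365)                          # one year of hourly steps
    load = (1000 + 0.01 * hours                          # synthetic "observed" load
            + 120 * np.cos(2 * np.pi * hours / (24 * 365))
            + 60 * np.cos(2 * np.pi * hours / (24 * 7))
            + 80 * np.cos(2 * np.pi * hours / 24)
            + np.random.default_rng(1).normal(0, 20, len(hours)))

    cols = [np.ones_like(hours, dtype=float), hours.astype(float)]
    for period in (24 * 365, 24 * 7, 24):                # yearly, weekly, daily harmonics
        cols += [np.sin(2 * np.pi * hours / period),
                 np.cos(2 * np.pi * hours / period)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, load, rcond=None)
    residual = load - X @ beta                           # stochastic part left for ARMA-GARCH
    print("residual standard deviation:", round(residual.std(), 1))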

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression

Procedia PDF Downloads 377
1486 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials

Authors: Rajesh Kumar G

Abstract:

A challenging aspect of any clinical trial is to carefully plan the study design so as to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there are the added challenges of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike for adult trials, little historical data is available for pediatrics, yet such data is required to validate the assumptions made when planning a pediatric trial. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so the pediatric study can be well planned using the adult study's historical information together with available pediatric pilot study data or simulated pediatric data. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and exploiting the advantages of a hybrid study design, which helps achieve the study objective more smoothly even in the presence of many constraints. This paper explains how the hybrid study design can be planned together with an integrated technique (SEV) to plan the pediatric study. In brief, the SEV technique (Simulation, Estimation, Validation) simulates the planned study data, obtains the desired estimates using borrowed adult data and Bayesian methods, and validates the assumptions. This method of validation can be used to improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, which allows informed decisions to be made well ahead of study initiation. Based on the collected data, this technique provides insight into best practices when using data from a historical study and simulated data alike.
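
One common way to borrow adult data in the Bayesian estimation step is a power prior, which down-weights the adult likelihood by a discount factor before combining it with the pediatric data; the conjugate Beta-binomial sketch below uses hypothetical counts and weight (the paper does not specify this particular prior):

    adult_n, adult_events = 400, 120        # historical adult study (hypothetical)
    ped_n, ped_events = 30, 8               # pediatric pilot data (hypothetical)
    a0 = 0.4                                # borrowing weight: 0 = none, 1 = full pooling

    # Beta(1, 1) initial prior; the power prior discounts adult counts by a0
    alpha = 1 + a0 * adult_events + ped_events
    beta = 1 + a0 * (adult_n - adult_events) + (ped_n - ped_events)

    post_mean = alpha / (alpha + beta)
    print(f"posterior mean response rate: {post_mean:.3f}")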

Keywords: adaptive design, simulation, borrowing data, bayesian model

Procedia PDF Downloads 46
1485 An Investigation of System and Operating Parameters on the Performance of Parabolic Trough Solar Collector for Power Generation

Authors: Umesh Kumar Sinha, Y. K. Nayak, N. Kumar, Swapnil Saurav, Monika Kashyap

Abstract:

The authors investigate the effect of system and operating parameters on the performance of a high-temperature solar concentrator for power generation. The effects were investigated using developed mathematical expressions for collector efficiency, heat removal factor, fluid outlet temperature, power, etc., and the results were simulated using a C++ program. The simulated results were plotted to investigate, for example, the effects of the thermal and radiative loss parameters on the collector efficiency, heat removal factor, and fluid outlet temperature, the rise of temperature, and the effect of mass flow rate on the fluid outlet temperature. In connection with power generation, plots were drawn for the effect of (TM–TAMB) on the variation of concentration efficiency, of concentrator irradiance on PM/PMN, and of evaporation temperature on the thermal-to-electric power (conversion) efficiency and the overall efficiency of the solar power plant.
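
A collector efficiency relation of the Hottel-Whillier type often used in such parametric studies can serve as a sketch; all values are illustrative, not the paper's:

    # eta = F_R * (tau*alpha) - F_R * U_L * (T_in - T_amb) / (C * G)
    FR = 0.9            # heat removal factor
    tau_alpha = 0.85    # optical (transmittance-absorptance) product
    UL = 4.0            # overall heat loss coefficient, W/m^2.K
    C = 20.0            # geometric concentration ratio
    G = 800.0           # beam irradiance, W/m^2

    def efficiency(T_in, T_amb=25.0):
        return FR * tau_alpha - FR * UL * (T_in - T_amb) / (C * G)

    for T_in in (100, 200, 300):            # inlet temperatures, deg C
        print(T_in, "degC ->", round(efficiency(T_in), 3))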

Keywords: parabolic trough solar collector, radiative and thermal loss parameters, collector efficiency, heat removal factor, fluid outlet and inlet temperatures, rise of temperature, mass flow rate, conversion efficiency, concentrator irradiance

Procedia PDF Downloads 293
1484 An Assessment of Different Blade Tip Timing (BTT) Algorithms Using an Experimentally Validated Finite Element Model Simulator

Authors: Mohamed Mohamed, Philip Bonello, Peter Russhard

Abstract:

Blade Tip Timing (BTT) is a technology concerned with the estimation of both the frequency and amplitude of rotating blade vibration. A BTT system comprises two main parts: (a) the arrival time measurement system and (b) the analysis algorithms. Simulators play an important role in the development of the analysis algorithms, since they generate blade tip displacement data from simulated blade vibration under controlled conditions. This enables an assessment of the performance of the different algorithms with respect to their ability to accurately reproduce the original simulated vibration. Such an assessment is usually not possible with real engine data, since there is no practical alternative to BTT for blade vibration measurement. Most simulators used in the literature are based on a simple spring-mass-damper model to determine the vibration. In this work, a more realistic, experimentally validated simulator based on a Finite Element (FE) model of a bladed disc (blisk) is first presented. It is then used to generate the necessary data for the assessment of different BTT algorithms. The FE model is validated using both a hammer test and two FireWire cameras for the mode shapes. A number of autoregressive methods, fitting methods, and state-of-the-art inverse methods (i.e., the Russhard method) are compared. All methods are compared with respect to both synchronous and asynchronous excitations, with both single and simultaneous frequencies. The study assesses the applicability of each method for different conditions of vibration, amounts of sampling data, and testing facilities, according to its performance and efficiency under these conditions.
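
The core of any BTT simulator is the sampling geometry: each probe sees the blade's vibration only once per revolution, so a high-frequency vibration appears as a slowly varying arrival-time offset; a sketch with hypothetical speeds and frequencies:

    import numpy as np

    rotor_speed_hz = 50.0             # revolutions per second
    vib_freq_hz = 162.0               # blade vibration frequency (asynchronous)
    amplitude_mm = 0.5
    revs = np.arange(200)
    t_pass = revs / rotor_speed_hz    # times the blade passes one probe

    # tip displacement sampled once per revolution (aliased by the sampling)
    tip_disp = amplitude_mm * np.sin(2 * np.pi * vib_freq_hz * t_pass)

    # displacement shifts the arrival time by displacement / tip tangential speed
    tip_speed = 2 * np.pi * 0.3 * rotor_speed_hz * 1000   # mm/s at 0.3 m radius
    arrival_offset_s = tip_disp / tip_speed
    print(f"max arrival-time offset: {arrival_offset_s.max():.2e} s")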

Keywords: blade tip timing, blisk, finite element, vibration measurement

Procedia PDF Downloads 286
1483 Design of an Acoustic Imaging Sensor Array for Mobile Robots

Authors: Dibyendu Roy, V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta

Abstract:

Imaging of underwater objects is primarily conducted by acoustic imagery due to the severe attenuation of electromagnetic waves in water. Underwater acoustic imagery has a varied range of significant applications, such as side-scan sonar and mine-hunting sonar, and it also finds utility in other domains such as imaging of body tissues via ultrasonography and non-destructive testing of objects. In this paper, we explore the feasibility of using active acoustic imagery in air and simulate phased array beamforming techniques available in the literature for various array designs, to achieve a suitable acoustic sensor array design for a portable mobile robot that can detect the presence or absence of anomalous objects in a room. Multi-path reflection effects, especially in enclosed rooms, and environmental noise factors are currently not simulated and will be dealt with during the experimental phase. The related hardware is designed with the same feasibility criterion: that the developed system must be deployable on a portable mobile robot. There is a trade-off between image resolution and range as functions of the array size, number of elements, and imaging frequency, which has to be simulated iteratively to achieve the desired acoustic sensor array design. The designed acoustic imaging array system is to be mounted on a portable mobile robot and is targeted for use in surveillance missions for intruder alerts and for imaging objects in dark and smoky scenarios where conventional optics-based systems do not function well.
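
The phased-array principle reduces to delay-and-sum beamforming; a narrowband sketch for a uniform linear array (geometry and frequency are hypothetical):

    import numpy as np

    c = 343.0                         # speed of sound in air, m/s
    f = 40e3                          # imaging frequency, Hz (ultrasonic)
    n_elem = 16
    d = c / f / 2                     # half-wavelength element spacing
    elem_x = np.arange(n_elem) * d

    def array_response(steer_deg, source_deg):
        k = 2 * np.pi * f / c
        w = np.exp(-1j * k * elem_x * np.sin(np.radians(steer_deg)))   # steering weights
        s = np.exp(1j * k * elem_x * np.sin(np.radians(source_deg)))   # incoming plane wave
        return abs(np.sum(w * s)) / n_elem

    for angle in (0, 10, 20, 30):     # response peaks when source matches the steering
        print(f"steered 20 deg, source {angle:2d} deg: {array_response(20, angle):.3f}")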

Keywords: acoustic sensor array, acoustic imagery, anomaly detection, phased array beamforming

Procedia PDF Downloads 380
1482 A Three-Dimensional (3D) Numerical Study of Roof Shape Impact on Air Quality in Urban Street Canyons with Tree Planting

Authors: Bouabdellah Abed, Mohamed Bouzit, Lakhdar Bouarbi

Abstract:

The objective of this study is to investigate numerically the effect of roof shape on wind flow and pollutant dispersion in a street canyon with one row of trees of pore volume Pvol = 96%. A three-dimensional computational fluid dynamics (CFD) model for evaluating air flow and pollutant dispersion within an urban street canyon uses the Reynolds-averaged Navier–Stokes (RANS) equations with the k-Epsilon EARSM turbulence model as closure of the equation system. The numerical model is implemented in the ANSYS-CFX code. Vehicle emissions were simulated as double line sources along the street. The numerical model was validated against a wind tunnel experiment. Having established this, the wind flow and pollutant dispersion in urban street canyons with six roof shapes were simulated. The numerical simulation agrees reasonably with the wind tunnel data. The results obtained in this work indicate that the flow in the 3D domain is more complicated; this complexity is increased by the presence of trees and the variability of the roof shapes. The results also indicate that the largest pollutant concentration levels on the two walls (leeward and windward) are observed with the upwind wedge-shaped roof, whereas the smallest pollutant concentration level is observed with the dome-shaped roof. Finally, the corner eddies provide additional ventilation and lead to lower traffic pollutant concentrations at the street canyon ends.

Keywords: street canyon, pollutant dispersion, trees, building configuration, numerical simulation, k-Epsilon EARSM

Procedia PDF Downloads 328
1481 A Standard Operating Procedure (SOP) for Forensic Soil Analysis: Tested Using a Simulated Crime Scene

Authors: Samara A. Testoni, Vander F. Melo, Lorna A. Dawson, Fabio A. S. Salvador

Abstract:

Soil traces are useful as forensic evidence due to their potential to transfer and adhere to different types of surfaces on a range of objects or persons. The great variability of soil physical, chemical, biological, and mineralogical properties makes soil traces complex mixtures. Soils are continuous and variable, yet no two soil samples are truly indistinguishable; this complexity of soil characteristics can provide powerful evidence for comparative forensic purposes. This work aimed to establish a Standard Operating Procedure (SOP) for forensic soil analysis in Brazil. We carried out a simulated crime scene with double-blind sampling to calibrate the sampling procedures. Samples were collected at a range of locations covering a range of soil types found in the south of Brazil: Santa Candida and Boa Vista, neighbourhoods of Curitiba (State of Parana), and Guarani and Guaraituba, neighbourhoods of Colombo (Curitiba Metropolitan Region). A previously validated sequence of chemical, physical, and mineralogical analyses was performed on around 2 g of soil. The suggested SOP and the sequential range of analyses were effective in grouping together the samples from the same place and the same parent material, and they successfully discriminated samples from different locations and different parent rocks. In addition, modifications to the sample treatment and analytical protocol can be made depending on the context of the forensic work.

Keywords: clay mineralogy, forensic soils analysis, sequential analyses, kaolinite, gibbsite

Procedia PDF Downloads 224
1480 Application of Biopolymer for Adsorption of Methylene Blue Dye from Simulated Effluent: A Green Method for Textile Industry Wastewater Treatment

Authors: Rabiya, Ramkrishna Sen

Abstract:

The textile industry releases huge volumes of effluent containing reactive dyes into nearby water bodies. These effluents are a significant source of water pollution, since most of the dyes are toxic in nature; moreover, they scavenge the dissolved oxygen essential to aquatic species. It is therefore necessary to treat dye effluent before it is discharged into nearby water bodies. The present study focuses on removing the basic dye methylene blue from simulated wastewater using a biopolymer. The biopolymer was partially purified from a culture of Bacillus licheniformis by ultrafiltration. Based on the elution profile of the biopolymer from an ion exchange column, it was found to be a negatively charged molecule. Its net anionic nature allows the biopolymer to adsorb the positively charged dye methylene blue. The major factors that influence dye removal by the biopolymer, such as incubation time, pH, and initial dye concentration, were evaluated. Methylene blue uptake by the biopolymer is higher at near-neutral pH (14.84 mg/g) than at acidic pH (12.05 mg/g). At low pH, the lower dissociation of the dye molecule and the lower negative charge available on the biopolymer reduce the interaction between the biopolymer and the dye. The optimum incubation time for maximum dye removal was found to be 60 min. The entire study was done with 25 mL of dye solution in a 100 mL flask at 25 °C with a biopolymer dosage of 11 g/L. To study the adsorption isotherm, the dye concentration was varied in the range of 25 mg/L to 205 mg/L, and the dye uptake by the biopolymer was plotted against the equilibrium concentration. The plot indicates that the adsorption of the dye by the biopolymer follows the Freundlich adsorption isotherm (R-square 0.99). Hence, these studies indicate the potential use of the biopolymer for the removal of basic dyes from textile wastewater in an ecofriendly and sustainable way.
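
The Freundlich fit reported above is conventionally done on the linearized form log q = log K + (1/n) log Ce; a sketch with hypothetical data points:

    import numpy as np

    Ce = np.array([5.0, 20.0, 50.0, 90.0, 140.0])    # equilibrium concentration, mg/L
    q = np.array([4.1, 7.6, 11.2, 13.6, 15.8])       # dye uptake, mg/g

    slope, intercept = np.polyfit(np.log10(Ce), np.log10(q), 1)
    K, n = 10 ** intercept, 1 / slope                # Freundlich constants
    r2 = np.corrcoef(np.log10(Ce), np.log10(q))[0, 1] ** 2
    print(f"K={K:.2f} n={n:.2f} R2={r2:.3f}")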

Keywords: biopolymer, methylene blue dye, textile industry, wastewater

Procedia PDF Downloads 116
1479 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model

Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim

Abstract:

Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the damage they cause is becoming greater. Therefore, more accurate prediction of rainfall and runoff is needed. However, gauge rainfall has limited accuracy in space. Radar rainfall explains the spatial variability of rainfall better than gauge rainfall, but it is mostly underestimated and carries uncertainty. Therefore, an ensemble of radar rainfall was simulated using an error structure to overcome this uncertainty, together with gauge rainfall. The simulated ensemble was used as input data for the rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis in the same basin, different models can give different results because of the uncertainty involved in the models. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han river basin, and we obtained one integrated, optimum runoff hydrograph using blending methods such as the Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of the rainfall and rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
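
Two of the blending schemes named above are easy to sketch: the simple model average and an inverse-MSE weighting that favors the model with the smaller error (series are illustrative):

    import numpy as np

    obs = np.array([10.0, 25.0, 60.0, 42.0, 18.0])     # observed runoff
    ssarr = np.array([12.0, 22.0, 55.0, 45.0, 20.0])   # lumped-model output
    vflo = np.array([9.0, 28.0, 64.0, 38.0, 15.0])     # distributed-model output

    sma = (ssarr + vflo) / 2                           # simple model average

    mse = [np.mean((m - obs) ** 2) for m in (ssarr, vflo)]
    w = np.array([1 / e for e in mse])
    w /= w.sum()                                       # weights favor the lower-MSE model
    blended = w[0] * ssarr + w[1] * vflo
    print("weights:", np.round(w, 2), "blended:", np.round(blended, 1))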

Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph

Procedia PDF Downloads 248
1478 Perceived Restorativeness Scale-6: A Short Version of the Perceived Restorativeness Scale for Mixed (or Mobile) Devices

Authors: Sara Gallo, Margherita Pasini, Margherita Brondino, Daniela Raccanello, Roberto Burro, Elisa Menardo

Abstract:

Most studies of the ability of environments to restore people's cognitive resources have been conducted in the laboratory using simulated environments (e.g., photographs, videos, or virtual reality), based on the implicit assumption that exposure to simulated environments has the same effects as exposure to real environments. However, the technical characteristics of simulated environments, such as the dynamic or static character of the stimulus, critically affect their perception. Measuring perceived restorativeness in situ rather than in the laboratory could increase the validity of the obtained measurements. Personal mobile devices could be useful here because they give immediate access to online surveys while people are directly exposed to an environment. At the same time, it becomes important to develop short and reliable measuring instruments that allow a quick assessment of the restorative qualities of environments. One of the frequently used self-report measures of perceived restorativeness is the “Perceived Restorativeness Scale” (PRS), based on Attention Restoration Theory. Many different versions have been proposed and used for different research purposes and needs, without their validity being studied. This longitudinal study reports preliminary validation analyses of a short version of the original scale, the PRS-6, developed to be quick and mobile-friendly. It is composed of 6 items assessing fascination and being-away. A total of 102 Italian university students participated in the study, 84% female, with ages ranging from 18 to 47 (M = 20.7; SD = 2.9). Data were obtained through an online survey that asked them to report the perceived restorativeness of the environment they were in (and the kind of environment) and their positive emotion (Positive and Negative Affect Schedule, PANAS) once a day for seven days. Cronbach's alpha and item-total correlations were used to assess reliability and internal consistency. Confirmatory Factor Analysis (CFA) models were run to study the factorial structure (construct validity). Correlation analyses between PRS and PANAS scores were used to check discriminant validity. Finally, multigroup CFA models were used to study measurement invariance (configural, metric, scalar, strict) between different mobile devices and between days of assessment. On the whole, the PRS-6 showed good psychometric properties, similar to those of the original scale, and invariance across devices and days. These results suggest that the PRS-6 could be a valid alternative for assessing perceived restorativeness when researchers need a brief and immediate evaluation of the restorative quality of an environment.
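
The internal-consistency check reduces to Cronbach's alpha over the six items; a sketch on hypothetical 5-point responses (rows are respondents):

    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    import numpy as np

    X = np.array([[4, 5, 4, 3, 4, 5],
                  [2, 3, 2, 2, 3, 2],
                  [5, 4, 5, 4, 4, 4],
                  [3, 3, 4, 3, 3, 3],
                  [1, 2, 1, 2, 2, 1]], dtype=float)

    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - item_var / total_var)
    print(f"Cronbach's alpha: {alpha:.3f}")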

Keywords: restorativeness, validation, short scale development, psychometric properties

Procedia PDF Downloads 220
1477 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material

Authors: H. M. Alfrihidi, H.A. Albarakaty

Abstract:

Flattening filter-free (FFF) photon beam radiotherapy has increased in use over the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques such as multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, due to the loss of the beam hardening effect otherwise provided by the flattening filter (FF). The possibility of improving FFF beam quality using filters of low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials is higher for low-energy photons than for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space (phsp) file provided by Varian Medical Systems was used as the radiation source in the simulation; this file was scored just above the jaws, at 27.88 cm from the target. The linac from the jaws downward was constructed, and the radiation passing through was simulated and scored at 100 cm from the target. To study the effect of the low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectra in the phase-space files. Then, the dose distributions resulting from these beams were simulated in a homogeneous water phantom using DOSXYZnrc. The dose profiles were evaluated according to the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra show that the FFF beam is softer than the FF beam: the energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak becomes 1.1 MeV with a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively, while the dose at a depth of 10 cm (D10) rose by around 2% and 0.5%, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively; however, this effect is much smaller than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters of low-Z material decrease the surface dose and increase the D10 dose, allowing high-dose delivery to deep tumors with a low skin dose. Although these filters affect the dose rate, the effect is much lower than that of the FF.
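
The hardening mechanism itself is plain Beer-Lambert attenuation with an energy-dependent coefficient; the two-component sketch below uses illustrative coefficients (not tabulated NIST values) to show the low-energy part of the spectrum being suppressed more:

    import math

    # (energy MeV, relative fluence, illustrative mu for 1 cm of steel in 1/cm)
    spectrum = [(0.5, 1.0, 0.65), (1.5, 0.6, 0.40)]
    thickness_cm = 1.0

    filtered = [(E, w * math.exp(-mu * thickness_cm)) for E, w, mu in spectrum]
    for (E, w0, _), (_, w1) in zip(spectrum, filtered):
        print(f"{E} MeV: relative fluence {w0:.2f} -> {w1:.2f}")
    # the low-energy component is attenuated more, so the mean energy rises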

Keywords: flattening filter free, monte carlo, radiotherapy, surface dose

Procedia PDF Downloads 52
1476 Unsteady Flow Simulations for Microchannel Design and Its Fabrication for Nanoparticle Synthesis

Authors: Mrinalini Amritkar, Disha Patil, Swapna Kulkarni, Sukratu Barve, Suresh Gosavi

Abstract:

Micro-mixers play an important role in lab-on-a-chip applications and micro total analysis systems, where the correct level of mixing must be achieved for any given process. The mixing process can be classified as active or passive according to the use of external energy. The microfluidics literature reports that most work has been done on models of steady laminar flow; the study of unsteady laminar flow, however, is currently an active area of research. Among its wide applications, we consider nanoparticle synthesis in micro-mixers. In this work, we have developed an unsteady flow model to study the mixing performance of a passive micro-mixer for the reactants used in such synthesis. The model is developed in the Finite Volume Method (FVM) based software OpenFOAM and is tested by carrying out simulations at a Reynolds number of 0.5. The mixing performance of the micro-mixer is investigated using simulated concentration values of the mixed species across the width of the micro-mixer and calculating the variance across a line profile. Experimental validation is done by passing dyes through a Y-shaped micro-mixer fabricated from polydimethylsiloxane (PDMS) polymer and comparing the measured variances with the simulated ones. Gold nanoparticles are later synthesized in the micro-mixer and collected at two different times, leading to significantly different size distributions. These times match the time scales over which the reactant concentrations vary, as obtained from the simulations. Our simulations could thus be used to create design aids for passive micro-mixers used in nanoparticle synthesis.
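
The variance measure described above is typically normalized into a mixing index; a sketch on an illustrative concentration profile sampled across the channel width:

    import numpy as np

    c = np.array([0.9, 0.8, 0.6, 0.5, 0.45, 0.4, 0.2, 0.1])  # line-profile concentrations
    c_mean = 0.5                        # perfectly mixed concentration

    variance = np.mean((c - c_mean) ** 2)
    var_max = c_mean * (1 - c_mean)     # variance of a fully segregated 0/1 profile
    mixing_index = 1 - np.sqrt(variance / var_max)            # 1 = fully mixed, 0 = segregated
    print(f"mixing index: {mixing_index:.2f}")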

Keywords: Lab-on-chip, LOC, micro-mixer, OpenFOAM, PDMS

Procedia PDF Downloads 137
1475 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model for predicting the damage from an accidental hydrogen explosion occurring in a hydrogen facility. The numerical model was based on the unstructured finite volume method (FVM) code “NuFD/FrontFlowRed”. For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model was based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model expressed the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume with a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, and the source was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side; the test was performed with ignition at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol% were used in these tests. The results from the numerical simulations were compared with the previous experimental data to assess the accuracy of the numerical model, and we verified that the simulated overpressures and flame time-of-arrival data were in good agreement with the results of the two explosion tests.

Keywords: deflagration, large eddy simulation, turbulent combustion, vented enclosure

Procedia PDF Downloads 222
1474 Dynamic Simulation of Disintegration of Wood Chips Caused by Impact and Collisions during the Steam Explosion Pre-Treatment

Authors: Muhammad Muzamal, Anders Rasmuson

Abstract:

Wood is extensively considered as a raw material for the production of bio-polymers, bio-fuels, and value-added chemicals. However, the shortcoming in using wood as a raw material is that its enzymatic hydrolysis is difficult, because the accessibility of enzymes to hemicelluloses and cellulose is hindered by the complex chemical and physical structure of the wood. The steam explosion (SE) pre-treatment improves the digestion of wood material by creating both chemical and physical modifications in the wood. In this process, wood chips are first treated with steam at high pressure and temperature for a certain time in a steam treatment vessel. During this time, the chemical linkages between lignin and polysaccharides are cleaved and the stiffness of the material decreases. Then the steam discharge valve is rapidly opened, and the steam and wood chips exit the vessel at very high speed. These fast-moving wood chips collide with each other and with the walls of the equipment and disintegrate into small pieces. More damaged and disintegrated wood has a larger surface area and increased accessibility to hemicelluloses and cellulose. Achieving the same increase in specific surface area requires about 70% more energy with a conventional mechanical technique (an attrition mill) than with the steam explosion process. The mechanism of wood disintegration during the SE pre-treatment has received very little study. In this study, we have simulated the collision and impact of wood chips (20 mm × 20 mm × 4 mm) with each other and with the walls of the vessel. The wood chips are simulated as a 3D orthotropic material, and damage and fracture in the wood are modelled using the 3D Hashin damage model. This was accomplished by developing a user-defined subroutine and implementing it in the FE software ABAQUS. The elastic and strength properties used for the simulations are those of spruce wood at 12% and 30% moisture content and at 20 and 160 °C, because the impacted wood chips are pre-treated with steam at high temperature and pressure. We simulated several cases to study the effects of the wood's elastic and strength properties, the velocity of the moving chip, and the orientation of the chip at the time of impact on the damage in the wood chips. The disintegration patterns captured by the simulations are very similar to those observed in experimentally obtained steam-exploded wood. The simulation results show that wood chips moving with higher velocity disintegrate more; higher moisture content and temperature decrease the elastic properties and increase the damage; and impacts and collisions in specific directions cause easier disintegration. This model can be used to design steam explosion equipment efficiently.

Keywords: dynamic simulation, disintegration of wood, impact, steam explosion pretreatment

Procedia PDF Downloads 371