Search results for: Distribution Feeder Reconfiguration (DFR)
1276 The Impacts of Local Decision Making on Customisation Process Speed across Distributed Boundaries: A Case Study
Authors: A. M. Qahtani, G. B. Wills, A. M. Gravell
Abstract:
Communicating and managing customers’ requirements in software development projects play a vital role in the software development process. While this is difficult to do locally, it is even more difficult to communicate these requirements across distributed boundaries and to convey them to multiple distributed customers. This paper discusses the communication of multiple distributed customers’ requirements in the context of customised software products. The main purpose is to understand the challenges of communicating and managing customisation requirements across distributed boundaries. We propose a model for Communicating Customisation Requirements of Multi-Clients in a Distributed Domain (CCRD). Thereafter, we evaluate that model by presenting the findings of a case study conducted with a company running customisation projects for 18 distributed customers. Then, we compare the outputs of the real case process with the outputs of the CCRD model using simulation methods. Our conjecture is that the CCRD model can reduce the challenge of communicating requirements across distributed organisational boundaries, as well as the delay in decision making and the overall customisation process time.
Keywords: Customisation Software Products, Global Software Engineering, Local Decision Making, Requirement Engineering, Simulation Model.
1275 Application of Gamma Frailty Model in Survival of Liver Cirrhosis Patients
Authors: Elnaz Saeedi, Jamileh Abolaghasemi, Mohsen Nasiri Tousi, Saeedeh Khosravi
Abstract:
Goals and Objectives: A typical analysis of survival data involves the modeling of time-to-event data, such as the time until death. A frailty model is a random effects model for time-to-event data, where the random effect has a multiplicative influence on the baseline hazard function. This article investigates the use of a gamma frailty model with concomitant variables in order to identify the prognostic factors that influence the survival times of liver cirrhosis patients. Methods: During the one-year study period (May 2008-May 2009), data were taken from the records of patients with liver cirrhosis who were scheduled for liver transplantation and were followed up for at least seven years in Imam Khomeini Hospital in Iran. In order to determine the factors affecting cirrhotic patients’ survival in the presence of latent variables, the gamma frailty distribution was applied. Parametric models, namely the Exponential and Weibull distributions, were considered for the survival times. Data analysis was performed using the R software, and a significance level of 0.05 was used for all tests. Results: 305 patients with liver cirrhosis, including 180 (59%) men and 125 (41%) women, were studied. The mean age of the patients was 39.8 years. At the end of the study, 82 (26%) patients had died; among them, 48 (58%) were men and 34 (42%) women. The main cause of liver cirrhosis was hepatitis B (23%), followed by cryptogenic cirrhosis (22.6%) as the second factor. Overall, the mean 7-year survival time was 28.44 months; for deceased and censored patients it was 19.33 and 31.79 months, respectively. Using multi-parametric survival models, Exponential and Weibull models incorporating the gamma frailty distribution were fitted to the cirrhosis data. In both models, factors including age, serum bilirubin, serum albumin, and encephalopathy had a significant effect on the survival time of cirrhotic patients. Conclusion: To investigate the factors affecting the time of death of patients with liver cirrhosis in the presence of latent variables, the gamma frailty model with parametric baseline distributions appears suitable.
Keywords: Frailty model, latent variables, liver cirrhosis, parametric distribution.
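The marginal likelihood behind such a model is compact: under a gamma frailty with mean 1 and variance θ, the population survival becomes S(t) = [1 + θ H0(t) e^{xβ}]^{-1/θ} with a Weibull baseline cumulative hazard H0(t) = (t/λ)^k. The sketch below is a minimal illustration of fitting this model to right-censored data by maximum likelihood; it is not the authors' R code, and the synthetic data, single covariate and starting values are assumptions made purely for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# synthetic right-censored data: t = follow-up time, d = 1 if death observed, x = covariate (standardised)
n = 300
x = rng.normal(size=n)
z = rng.gamma(shape=2.0, scale=0.5, size=n)                    # latent frailty, mean 1, variance 0.5
t_event = 40.0 * rng.weibull(1.5, size=n) / (z * np.exp(0.4 * x)) ** (1.0 / 1.5)
t_cens = rng.uniform(10.0, 84.0, size=n)                       # administrative censoring
t = np.minimum(t_event, t_cens)
d = (t_event <= t_cens).astype(float)

def neg_log_lik(par):
    log_lam, log_k, log_theta, beta = par
    lam, k, theta = np.exp(log_lam), np.exp(log_k), np.exp(log_theta)
    H0 = (t / lam) ** k                                        # Weibull baseline cumulative hazard
    h0 = (k / lam) * (t / lam) ** (k - 1.0)                    # Weibull baseline hazard
    eta = np.exp(beta * x)
    log_S = -(1.0 / theta) * np.log1p(theta * H0 * eta)        # marginal survival after integrating out the frailty
    log_h = np.log(h0 * eta) - np.log1p(theta * H0 * eta)      # marginal hazard
    return -np.sum(d * log_h + log_S)                          # events add log h + log S, censored add log S

start = np.array([np.log(np.median(t)), 0.0, 0.0, 0.0])
fit = minimize(neg_log_lik, start, method="Nelder-Mead", options={"maxiter": 4000})
lam_hat, k_hat, theta_hat = np.exp(fit.x[:3])
print(f"scale={lam_hat:.1f}  shape={k_hat:.2f}  frailty variance={theta_hat:.2f}  beta={fit.x[3]:.2f}")
```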
1274 A New Approach for Image Segmentation using Pillar-Kmeans Algorithm
Authors: Ali Ridho Barakbah, Yasushi Kiyoki
Abstract:
This paper presents a new approach for image segmentation by applying the Pillar-Kmeans algorithm. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after being optimized by the Pillar algorithm. The Pillar algorithm considers the placement of pillars, which should be located as far apart as possible to withstand the pressure distribution of a roof, as analogous to the placement of centroids within the data distribution. This algorithm is able to optimize K-means clustering for image segmentation in terms of precision and computation time. It designates the initial centroid positions by calculating the accumulated distance metric between each data point and all previously selected centroids, and then selects the data point with the maximum distance as the new initial centroid. In this way, all initial centroids are distributed according to the maximum accumulated distance metric. This paper evaluates the proposed approach for image segmentation by comparing it with the K-means and Gaussian Mixture Model algorithms and considering the RGB, HSV, HSL and CIELAB color spaces. The experimental results confirm the effectiveness of our approach in improving the segmentation quality in terms of precision and computation time.
Keywords: Image segmentation, K-means clustering, Pillar algorithm, color spaces.
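A minimal sketch of the accumulated-distance seeding idea described above, used to initialise K-means, follows. It is a simplified reading of the Pillar algorithm (the published method also includes outlier handling, omitted here), and the toy RGB pixel data are assumed for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def pillar_seeds(X, k):
    """Pick k initial centroids: each new centroid is the data point with the largest
    accumulated distance to all previously chosen centroids."""
    acc = np.linalg.norm(X - X.mean(axis=0), axis=1)      # start from the point farthest from the grand mean
    seeds = [X[np.argmax(acc)]]
    acc = np.zeros(len(X))
    for _ in range(1, k):
        acc += np.linalg.norm(X - seeds[-1], axis=1)      # accumulate distance to the newest centroid
        seeds.append(X[np.argmax(acc)])
    return np.array(seeds)

# toy "image" in RGB space: rows are pixels, columns are colour channels
rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(c, 8, size=(500, 3))
                    for c in ([30, 40, 50], [200, 60, 60], [90, 180, 120])])
seeds = pillar_seeds(pixels, k=3)
labels = KMeans(n_clusters=3, init=seeds, n_init=1).fit_predict(pixels)
print(np.bincount(labels))   # cluster sizes with the deterministic seeding
```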
1273 Three-Dimensional Simulation of Free Electron Laser with Prebunching and Efficiency Enhancement
Authors: M. Chitsazi, B. Maraghechi, M. H. Rouhani
Abstract:
A three-dimensional simulation of harmonic up-generation in a free electron laser amplifier operating with a cold, relativistic electron beam is presented in the steady-state regime, where the slippage of the electromagnetic wave with respect to the electron beam is ignored. By using the slowly varying envelope approximation and applying the source-dependent expansion to the wave equations, the electromagnetic fields are represented in terms of Hermite-Gaussian modes, which are well suited to the planar wiggler configuration. The electron dynamics is described by the fully three-dimensional Lorentz force equation in the presence of the realistic planar magnetostatic wiggler and the electromagnetic fields. A set of coupled nonlinear first-order differential equations is derived and solved numerically. The fundamental and third harmonic radiation of the beam is considered. In addition to a uniform beam, a prebunched electron beam has also been studied. For this, the effect of a sinusoidal distribution of electron entry times on the evolution of the radiation is compared with that of a uniform distribution. It is shown that prebunching reduces the saturation length substantially. For efficiency enhancement, the wiggler amplitude is set to decrease linearly once the third-harmonic radiation saturates. The optimum starting point and slope of the wiggler amplitude taper are found by successive runs of the code.
Keywords: Free electron laser, Prebunching, Undulator, Wiggler.
1272 Mechanical Properties and Chloride Diffusion of Ceramic Waste Aggregate Mortar Containing Ground Granulated Blast-Furnace Slag
Authors: H. Higashiyama, M. Sappakittipakorn, M. Mizukoshi, O. Takahashi
Abstract:
Ceramic Waste Aggregates (CWAs) were made from electric porcelain insulator wastes supplied by an electric power company, which were crushed and ground to fine aggregate sizes. In this study, to develop the CWA mortar as an eco-efficient material, ground granulated blast-furnace slag (GGBS) was incorporated as a Supplementary Cementitious Material (SCM). The water-to-binder ratio (W/B) of the CWA mortars was varied at 0.4, 0.5, and 0.6. The cement of the CWA mortar was replaced by GGBS at 20 and 40% by volume (about 18 and 37% by weight). Mechanical properties, namely compressive and splitting tensile strengths and elastic modulus, were evaluated at the ages of 7, 28, and 91 days. Moreover, a chloride ingress test was carried out on the CWA mortars in a 5.0% NaCl solution for 48 weeks. The chloride diffusion was assessed by electron probe microanalysis (EPMA). To relate the apparent chloride diffusion coefficient to the pore size, a pore size distribution test was also performed using mercury intrusion porosimetry at the same ages as the EPMA. The compressive strength of the CWA mortars with GGBS was higher than that without GGBS at the ages of 28 and 91 days. The resistance to chloride ingress of the CWA mortar improved in proportion to the GGBS replacement level.
Keywords: Ceramic waste aggregate, Chloride diffusion, GGBS, Pore size distribution.
1271 Additive Friction Stir Manufacturing Process: Interest in Understanding Thermal Phenomena and Numerical Modeling of the Temperature Rise Phase
Authors: A. Lauvray, F. Poulhaon, P. Michaud, P. Joyot, E. Duc
Abstract:
Additive Friction Stir Manufacturing, or AFSM, is a new industrial process that follows the emergence of friction-based processes. The AFSM process is a solid-state additive process using the energy produced by friction at the interface between a rotating non-consumable tool and a substrate. Friction depends on various parameters such as axial force, rotation speed and friction coefficient. The feed material is a metallic rod that flows through a hole in the tool. There is still a lack of understanding of the physical phenomena taking place during the process. This research aims at a better understanding and implementation of the AFSM process through numerical simulation and experimental validation performed on a prototype effector. Such an approach is considered a promising way to study the influence of the process parameters and, finally, to identify a relevant process window. The deposition of material through the AFSM process takes place in several phases; in chronological order, these are the docking phase, the dwell time phase, the deposition phase, and the removal phase. The present work focuses on the dwell time phase, which produces the temperature rise of the system due to pure friction. An analytical model of frictional heat generation takes the rotational speed and the contact pressure as its main parameters. Another influential parameter is the friction coefficient, assumed to be variable due to the self-lubrication of the system as temperature rises and to the smoothing of the roughness of the materials in contact over time. Through numerical modeling followed by experimental validation, this study examines the influence of the various input parameters on the dwell time phase. Rotation speed, temperature, spindle torque and axial force are the main parameters monitored during the experiments and serve as reference data for the calibration of the numerical model. This research shows that the geometry of the tool, as well as fluctuations of the input parameters such as axial force and rotational speed, are very influential on the temperature reached and/or the time required to reach the targeted temperature. The main outcome is the prediction of a process window, which is a key result for a more efficient process implementation.
Keywords: numerical model, additive manufacturing, frictional heat generation, process
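As a purely illustrative complement to the analytical model mentioned above, the sketch below estimates the frictional heat generated during the dwell phase for a flat circular tool contact, Q = (2/3)πμpωR³, and the idealised time needed to reach a target temperature with a lumped thermal mass. All parameter values (tool radius, pressure, speed, friction coefficient, heated mass) are assumptions, not the paper's calibrated inputs.

```python
import numpy as np

mu = 0.35                          # friction coefficient (assumed constant here; variable in the paper)
p = 60e6                           # contact pressure, Pa (axial force / contact area)
omega = 2 * np.pi * 1200 / 60      # tool rotation: 1200 rpm -> rad/s
R = 6e-3                           # tool contact radius, m

# Frictional power for a flat circular contact with uniform pressure:
# dQ = mu * p * omega * r dA  integrated over the disc  ->  Q = (2/3) * pi * mu * p * omega * R^3
Q = (2.0 / 3.0) * np.pi * mu * p * omega * R ** 3
print(f"heat generation rate: {Q:.0f} W")

# Lumped-capacitance estimate of the time to reach a target temperature rise,
# ignoring conduction losses (so it is an upper bound on the heating speed).
m_heated = 0.015                   # kg of material assumed to absorb the heat
c_p = 900.0                        # J/(kg K), aluminium-like
dT_target = 380.0                  # K above ambient
t_needed = m_heated * c_p * dT_target / Q
print(f"idealised dwell time to +{dT_target:.0f} K: {t_needed:.1f} s")
```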
1270 The Calculation of Electromagnetic Fields (EMF) in Substations of Shopping Centers
Authors: Adnan Muharemovic, Hidajet Salkic, Mario Klaric, Irfan Turkovic, Aida Muharemovic
Abstract:
Electromagnetic fields always appear in nature, such as the atmospheric static electric field, the Earth's static magnetic field, and the wide-frequency-range electromagnetic fields caused by lightning. However, besides natural electromagnetic fields (EMF), human beings today are mostly exposed to artificial electromagnetic fields due to technological progress and the widespread use of electrical devices. To evaluate the nuisance of EMF, it is necessary to know the field intensity at every frequency present and to compare it with the permitted values. Low-frequency EMFs around transmission and distribution lines are time-varying quasi-static electromagnetic fields, with a conservative component of the low-frequency electric field caused by charges and an eddy component of the low-frequency magnetic field caused by currents. Displacement currents and field retardation are negligible, so energy flow in a quasi-static EMF involves diffusion, analogous to heat transfer, and the electric and magnetic fields can be analyzed separately. This paper analyzes numerical calculations, performed in the ELF-400 software, of the EMF in a distribution substation of a shopping center. By analyzing the results, it is possible to identify locations exposed to the fields and to give useful suggestions for eliminating the electromagnetic effects or reducing them to an acceptable level within the non-ionizing radiation norms and the norms for protection from EMF.
Keywords: Electromagnetic Field, Density of Electromagnetic Flow, Place of Professional Exposure, Place of Increased Sensitivity.
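A hand-calculation of the kind such software automates can be sketched for the magnetic component alone: the flux density of a long straight conductor is B = μ0·I/(2πr). The example below uses an assumed busbar current and an assumed 100 µT power-frequency reference level for general-public exposure; both numbers are illustrative and not values taken from the paper or from a specific standard edition.

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability, H/m
I_rms = 800.0               # assumed busbar current, A
limit_uT = 100.0            # assumed public reference level at power frequency, microtesla

for r in (0.3, 0.5, 1.0, 2.0, 5.0):                 # distance from the conductor, m
    B_uT = MU0 * I_rms / (2 * np.pi * r) * 1e6      # B = mu0 * I / (2 pi r), in microtesla
    status = "above" if B_uT > limit_uT else "below"
    print(f"r = {r:4.1f} m : B = {B_uT:7.1f} uT ({status} the assumed reference level)")
```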
1269 Comparison of Automated Zone Design Census Output Areas with Existing Output Areas in South Africa
Authors: T. Mokhele, O. Mutanga, F. Ahmed
Abstract:
South Africa is one of the few countries that have stopped using the same Enumeration Areas (EAs) for both census enumeration and dissemination. The advantage of this change is that confidentiality issues can be addressed for census dissemination, since the geographic unit for collection is designed mainly to ensure that it can be covered by one enumerator. The objective of this paper was to evaluate the performance of automated zone design output areas against geographies developed without zone design, using the 2001 census data (and, to some extent, the 2011 census) as the main input. The Automated Zone-design Tool (AZTool) census output areas were compared with the Small Area Layers (SALs) and SubPlaces in terms of confidentiality limit, population distribution, degree of homogeneity, and shape compactness. Further, SPSS was employed for validation of the AZTool output results. The results showed that the AZTool-developed output areas outperform the existing official SALs and SubPlaces with regard to minimum population threshold and population distribution, and to some extent homogeneity. It was therefore concluded that the AZTool program provides a new alternative for the creation of optimised census output areas for the dissemination of population census data in South Africa.
Keywords: AZTool, enumeration areas, small area layers, South Africa.
1268 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step-Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of the simple step-stress accelerated life test, using a Bayesian approach, for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile of the time to failure. The model variables are the stress-changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without substantially affecting its precision. The results also show that the use of direct or indirect priors affects the precision of the test.
Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.
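The cumulative exposure model assumed above can be made concrete with a small simulation: failure times under a two-level step-stress test with a common Weibull shape, followed by progressive Type-I right censoring. The sketch below uses assumed stress levels, scales, censoring fraction and test duration, and does not reproduce the paper's Bayesian design criterion.

```python
import numpy as np

rng = np.random.default_rng(7)

beta = 1.8                     # common Weibull shape
alpha1, alpha2 = 900.0, 300.0  # Weibull scale at the low / high stress level (hours)
tau = 200.0                    # stress-change time (hours)
T_end = 450.0                  # test termination time (hours)
pi_remove = 0.2                # fraction of survivors progressively removed at tau

def step_stress_times(n):
    """Failure times under Nelson's cumulative exposure model for a simple step-stress test."""
    u = rng.uniform(size=n)
    w = (-np.log(1.0 - u)) ** (1.0 / beta)
    t_low = alpha1 * w                          # time to failure if the low stress were applied throughout
    # after tau the unit behaves as if it had already aged s = tau*alpha2/alpha1 at stress level 2
    return np.where(t_low <= tau, t_low, tau + alpha2 * (w - tau / alpha1))

t = step_stress_times(40)
removed_at_tau = (t > tau) & (rng.uniform(size=t.size) < pi_remove)   # progressive censoring at tau
censored = removed_at_tau | (t > T_end)                               # plus Type-I censoring at T_end
obs = np.where(removed_at_tau, tau, np.minimum(t, T_end))
print(f"failures: {(~censored).sum()}, censored: {censored.sum()}, mean observed time: {obs.mean():.0f} h")
```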
1267 Prediction of Temperature Distribution during Drilling Process Using Artificial Neural Network
Authors: Ali Reza Tahavvor, Saeed Hosseini, Nazli Jowkar, Afshin Karimzadeh Fard
Abstract:
Experimental and numerical study of the temperature distribution during the milling process is important for milling quality and tool life. In the present study, the milling cross-section temperature is determined using Artificial Neural Networks (ANN), based on the temperatures at certain points of the workpiece, the coordinates of those points, and the rotational speed of the milling blade. First, a three-dimensional model of the workpiece is built, and then Computational Heat Transfer (CHT) simulations are used to obtain the temperatures at different nodes of the workpiece under steady-state conditions. The results obtained from the CHT are used for training and testing the ANN approach. Using reverse engineering, and setting the desired x, y, z coordinates and the milling rotational speed of the blade as input data to the network, the milling surface temperature determined by the neural network is presented as output data. The temperatures at the desired points are obtained experimentally for different blade rotational speeds, the milling surface temperature is obtained by extrapolation, and a comparison is performed among the ANN predictions, the CHT results and the experimental data. It is observed that the ANN code can be used efficiently to determine the temperature in a milling process.
Keywords: Milling process, rotational speed, Artificial Neural Networks, temperature.
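The ANN stage described above can be sketched independently of the CHT solver: a small multilayer perceptron maps (x, y, z, rotational speed) to temperature. In the sketch below, an assumed analytic temperature field stands in for the CHT nodal results, so the numbers are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

n = 4000
xyz = rng.uniform(0.0, 50.0, size=(n, 3))              # node coordinates, mm
rpm = rng.uniform(500.0, 3000.0, size=(n, 1))          # blade rotational speed
X = np.hstack([xyz, rpm])

# surrogate "CHT" temperature field: hotter near the cutting zone (origin), rising with rpm
r = np.linalg.norm(xyz, axis=1)
T = 25.0 + 0.12 * rpm[:, 0] * np.exp(-r / 18.0) + rng.normal(0.0, 2.0, size=n)

X_tr, X_te, T_tr, T_te = train_test_split(X, T, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
model.fit(X_tr, T_tr)
print(f"R^2 on held-out nodes: {model.score(X_te, T_te):.3f}")
print("predicted T at (5, 5, 5) mm, 2500 rpm:", model.predict([[5.0, 5.0, 5.0, 2500.0]]))
```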
1266 Targeting the Life Cycle Stages of the Diamond Back Moth (Plutella xylostella) with Three Different Parasitoid Wasps
Authors: F. O. Faithpraise, J. Idung, C. R. Chatwin, R. C. D. Young, P. Birch
Abstract:
A continuous-time model of the interaction between crop insect pests and naturally beneficial pest enemies is created using a set of simultaneous, non-linear, ordinary differential equations incorporating natural death rates based on the Weibull distribution. The crop pest is present in all its life-cycle stages: egg, larva, pupa and adult. The beneficial insects, parasitoid wasps, may be present in any or all of the parasitized stages: eggs, larvae and pupae. Population modelling is used to estimate the quantity of natural pest enemies that should be introduced into the pest-infested environment to suppress the pest population density to an economically acceptable level within a prescribed number of days. The results obtained illustrate the effect of different combinations of parasitoid wasps, using the Pascal distribution to estimate their success in parasitizing different pest developmental stages, in delivering pest control to a sustainable level. Effective control, within a prescribed number of days, is established by the deployment of two or all three species of wasps, which partially destroy the pest's egg, larval and pupal stages. The selected scenarios demonstrate effective sustainable control of the pest in less than thirty days.
Keywords: Biological control, Diamondback moth, Parasitoid wasps, Population modeling.
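A heavily simplified sketch of the kind of stage-structured pest/parasitoid model described above is given below: constant per-capita rates replace the Weibull mortality and Pascal-distributed parasitism success used in the paper, and every rate value and release size is an assumption chosen only to illustrate suppression of the pest within thirty days.

```python
import numpy as np
from scipy.integrate import solve_ivp

# per-day rates: b = adult egg laying, g = stage progression, d = natural death
b, gE, gL, gP = 8.0, 0.25, 0.20, 0.25
dE, dL, dP, dA = 0.10, 0.10, 0.05, 0.08
# parasitism pressure on eggs / larvae / pupae from three wasp species (releases held constant)
aE, aL, aP = 0.030, 0.025, 0.020
W = np.array([40.0, 40.0, 40.0])      # released wasps attacking egg, larval and pupal stages

def rhs(t, y):
    E, L, P, A = y
    dEdt = b * A - (gE + dE + aE * W[0]) * E
    dLdt = gE * E - (gL + dL + aL * W[1]) * L
    dPdt = gL * L - (gP + dP + aP * W[2]) * P
    dAdt = gP * P - dA * A
    return [dEdt, dLdt, dPdt, dAdt]

sol = solve_ivp(rhs, (0.0, 30.0), y0=[200.0, 80.0, 40.0, 20.0], dense_output=True)
E, L, P, A = sol.y[:, -1]
print(f"after 30 days: eggs={E:.0f}, larvae={L:.0f}, pupae={P:.0f}, adults={A:.0f}")
```

With these illustrative rates the pest's reproduction number falls below one once all three wasp species are released, so every stage declines; removing the wasp terms makes the same system grow, which is the qualitative contrast the abstract describes.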
1265 Application of CFD for Air Flow Analysis underneath Natural Ventilation with Forced Convection in Roof Attic
Authors: C. Nutphuang, S. Chirarattananon, V.D. Hien
Abstract:
In research on natural ventilation and passive cooling with forced convection, it is essential to know how heat flows in a solid object, the pattern of the temperature distribution on its surfaces, and eventually how air flows through the attic and convects heat away from the surfaces of the steel under the roof. This paper presents results from a computational fluid dynamics (CFD) program comparing natural ventilation and forced convection within a roof attic that receives solar radiation directly. The CFD model of the air flow inside the roof attic covers two cases: in the first case, analysed under natural ventilation, the roof attic is a closed area; in the second case, analysed under forced convection, the roof attic is an opened area. Both cases provide predictions of variables such as the temperature, pressure, and mass flow rate distributions within the roof attic. The comparison shows that this CFD program is an effective model for predicting the air flow, temperature, and heat transfer coefficient distributions within the roof attic. The results show that forced convection can help to reduce heat transfer through the roof attic, and that the inner zone around the steel core has a lower temperature than with natural ventilation. The temperature difference at the steel core of the roof attic between the two cases was 10-15 K.
Keywords: CFD program, natural ventilation, forced convection, heat transfer, air flow.
1264 The Transient Reactive Power Regulation Capability of SVC for Large Scale WECS Connected to Distribution Networks
Authors: Y. Ates, A. R. Boynuegri, M. Uzunoglu, A. Karakas
Abstract:
The recent interest in alternative and renewable energy systems has increased the share of such systems in the total installed capacity of world energy production. Specifically, Wind Energy Conversion Systems (WECS) have recently drawn significant attention among the possible alternative energy options. Despite the benefits of worldwide WECS penetration in terms of environmental protection, national energy independence, etc., there are significant problems to be solved for the grid connection of large-scale WECS. Reactive power regulation, voltage variation suppression, etc. can be presented as major issues to be considered in this regard. Thus, this paper evaluates the application of a Static VAr Compensator (SVC) unit for the reactive power regulation and operational continuity of WECS during a fault condition. The system is modeled using the IEEE 13 node test system, which makes it possible to evaluate the system performance with an overall grid simulation model close to real grid systems. The overall simulation model is developed in the MATLAB/Simulink/SimPowerSystems® environment, and the obtained results effectively meet the aims of the study.
Keywords: IEEE 13 bus distribution system, reactive power regulation, static VAr compensator, wind energy conversion system.
1263 Effect of Cooling Rate on Base Metals Recovery from Copper Matte Smelting Slags
Authors: N. Tshiongo , R K.K. Mbaya , K Maweja, L.C. Tshabalala
Abstract:
A slag sample from a copper smelting operation in a water-jacket furnace at a DRC plant was used. The study aims to determine the effect of the cooling rate on the extraction of base metals. The cooling methods investigated were water quenching, air cooling and furnace cooling, and these were compared with the original as-received slag. It was observed that the cooling rate of the slag affected the leaching of base metals, as it changed the phase distribution in the slag and the distribution of base metals within the phases. It was also found that fast cooling of the slag prevented crystallization and produced an amorphous phase that encloses the base metals. The amorphous slags from the slag dumps were more leachable in acidic medium (HNO3), which leached 46% Cu, 95% Co, 85% Zn, 92% Pb and 79% Fe with no selectivity at pH 0, than in basic medium (NH4OH). The behaviour was reversed for the slags modified by water quenching, which leached 89% Cu with high selectivity, the extractions of Co, Zn, Pb and Fe being less than 1% at ambient temperature and pH 12. For the crystallized slags, the leaching of base metals increased as the temperature rose from ambient to 60°C and decreased at the higher temperature of 80°C, due to the evaporation of the ammonia solution used for basic leaching. The total amounts of base metals leached from the slow-cooled slags were very low compared with the quenched slag samples.
Keywords: copper slag, leaching, amorphous, cooling rate.
1262 Thermodynamic Cycle Analysis for Overall Efficiency Improvement and Temperature Reduction in Gas Turbines
Authors: Jeni A. Popescu, Ionut Porumbel, Valeriu A. Vilag, Cleopatra F. Cuciumita
Abstract:
The paper presents a thermodynamic cycle analysis for three turboshaft engines. The first cycle is a Brayton cycle, describing the evolution of a classical turboshaft based on the Klimov TV2 engine. The other four cycles aim at approaching an Ericsson cycle by replacing the adiabatic expansion in the turbine of the Brayton cycle with a quasi-isothermal expansion. The maximum quasi-Ericsson cycle temperature is set to a value lower than the maximum Brayton cycle temperature and equal to the Brayton cycle power turbine inlet temperature, in order to decrease the engine NOx emissions. Also, the power/expansion ratio distribution over the stages of the gas generator turbine is kept the same. In two of the considered quasi-Ericsson cycles, the efficiencies of the gas generator turbine, as well as the power/expansion ratio distribution over its stages, are kept the same as in the reference case, while in the other two cases the efficiencies are increased in order to obtain the same shaft power as in the reference case. For the two cases respecting the first condition, both the shaft power and the thermodynamic efficiency of the engine decrease, while for the other two the power and efficiency are maintained, as a result of assuming new, more efficient gas generator turbines.
Keywords: Combustion, Ericsson, thermodynamic analysis, turbine.
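The efficiency argument can be illustrated with ideal air-standard relations: a Brayton cycle's thermal efficiency depends only on its pressure ratio, while an ideal Ericsson cycle reaches the Carnot limit between its temperature bounds, which is why a lower peak temperature can be tolerated. The pressure ratios and temperatures below are assumed example values, not the Klimov TV2 data.

```python
gamma = 1.4                 # ratio of specific heats for air

def brayton_efficiency(pressure_ratio):
    """Ideal air-standard Brayton cycle efficiency, set by the pressure ratio alone."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

def ericsson_efficiency(t_min_k, t_max_k):
    """Ideal Ericsson cycle efficiency, equal to the Carnot limit between the two temperatures."""
    return 1.0 - t_min_k / t_max_k

for pr in (8.0, 12.0, 16.0):
    print(f"ideal Brayton, pressure ratio {pr:4.1f}: eta = {brayton_efficiency(pr):.3f}")

# a quasi-Ericsson cycle lets the peak temperature be lowered (helping NOx)
# while keeping a high ideal efficiency between the same temperature bounds
print(f"ideal Ericsson, 288 K / 1300 K: eta = {ericsson_efficiency(288.0, 1300.0):.3f}")
print(f"ideal Ericsson, 288 K / 1150 K: eta = {ericsson_efficiency(288.0, 1150.0):.3f}")
```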
1261 Influence of Outer Corner Radius in Equal Channel Angular Pressing
Authors: Basavaraj V. Patil, Uday Chakkingal, T. S. Prasanna Kumar
Abstract:
Equal Channel Angular Pressing (ECAP) is currently being widely investigated because of its potential to produce ultrafine-grained microstructures in metals and alloys. A sound knowledge of the plastic deformation and strain distribution is necessary for understanding the relationships between strain inhomogeneity and die geometry. Considerable research has been reported on finite element analysis of this process assuming two-dimensional plane strain conditions. However, two-dimensional models are not suitable given the geometry of the dies, especially cylindrical ones. In the present work, a three-dimensional simulation of the ECAP process was carried out for six outer corner radii (sharp to 10 mm in steps of 2 mm), with a channel angle of 105°, for a strain-hardening aluminium alloy (AA 6101) using the ABAQUS/Standard software. Strain inhomogeneity is presented and discussed for all cases. The pattern of strain variation along selected radial lines in the body of the workpiece is presented. The results show that the outer corner radius has a significant influence on the strain distribution in the body of the workpiece. Based on inhomogeneity and average strain criteria, there is an optimum outer corner radius.
Keywords: Equal Channel Angular Pressing, finite element analysis, strain inhomogeneity, plastic equivalent strain, ultrafine grain size, aluminium alloy 6101.
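A quick analytical cross-check often used alongside such FE studies is the Iwahashi et al. relation for the per-pass equivalent strain, ε = (1/√3)[2 cot((Φ+Ψ)/2) + Ψ cosec((Φ+Ψ)/2)], where Φ is the channel angle and Ψ the outer corner angle. The sketch below evaluates it for the 105° channel; the outer corner radius studied in the FE work maps onto Ψ only approximately, so this is a rough guide rather than a substitute for the simulation.

```python
import numpy as np

def ecap_strain(phi_deg, psi_deg, passes=1):
    """Iwahashi et al. equivalent strain for ECAP with channel angle phi and corner angle psi."""
    phi, psi = np.radians(phi_deg), np.radians(psi_deg)
    half = (phi + psi) / 2.0
    return passes / np.sqrt(3.0) * (2.0 / np.tan(half) + psi / np.sin(half))

for psi in (0.0, 10.0, 20.0, 30.0, 40.0):
    print(f"Phi = 105 deg, Psi = {psi:4.1f} deg : equivalent strain = {ecap_strain(105.0, psi):.3f}")
```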
1260 Diagnosing Dangerous Arrhythmia of Patients by Automatic Detecting of QRS Complexes in ECG
Authors: Jia-Rong Yeh, Ai-Hsien Li, Jiann-Shing Shieh, Yen-An Su, Chi-Yu Yang
Abstract:
In this paper, an automatic QRS complex detection algorithm was applied for analyzing ECG recordings, and five criteria for diagnosing dangerous arrhythmia were applied in a protocol-type automatic arrhythmia diagnosing system. The automatic detection algorithm applied in this paper detected the distribution of QRS complexes in the ECG recordings and related information, such as the heart rate and RR intervals. In this investigation, twenty sampled ECG recordings of patients with different pathologic conditions were collected for off-line analysis. A combined application of four digital filters for improving the ECG signals and raising the QRS complex detection rate was proposed as pre-processing. Both hardware filters and digital filters were applied to eliminate the different types of noise mixed with the ECG recordings. Then, the automatic QRS complex detection algorithm was applied to verify the distribution of QRS complexes. Finally, the quantitative clinical criteria for diagnosing arrhythmia were programmed as a post-processor in a practical application for automatic arrhythmia diagnosis. The results of the automatic dangerous arrhythmia diagnoses were compared with the results of off-line diagnoses by experienced clinical physicians. The comparison showed that the automatic dangerous arrhythmia diagnosis achieved a matching rate of 95% against the experienced physician's diagnoses.
Keywords: Signal processing, electrocardiography (ECG), QRS complex, arrhythmia.
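A generic Pan-Tompkins-style detection pipeline (band-pass, differentiate, square, moving-window integrate, peak picking with a refractory period) illustrates the kind of QRS detection described above; it is not the authors' specific four-filter combination, and the ECG trace below is synthetic and purely illustrative.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 360.0                                        # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rr = 0.8                                          # simulated RR interval of 0.8 s (75 bpm)
ecg = sum(np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2)) for beat in np.arange(0.5, 10, rr))
ecg += 0.1 * np.sin(2 * np.pi * 0.3 * t) + 0.02 * np.random.default_rng(0).normal(size=t.size)

b, a = butter(2, [5.0, 15.0], btype="band", fs=fs)    # band-pass to emphasise the QRS band
filtered = filtfilt(b, a, ecg)
integrated = np.convolve(np.diff(filtered, prepend=filtered[0]) ** 2,
                         np.ones(int(0.15 * fs)) / (0.15 * fs), mode="same")

peaks, _ = find_peaks(integrated,
                      height=0.5 * integrated.max(),
                      distance=int(0.25 * fs))        # 250 ms refractory period
rr_intervals = np.diff(peaks) / fs
print(f"detected {peaks.size} QRS complexes, mean RR = {rr_intervals.mean():.2f} s, "
      f"heart rate = {60 / rr_intervals.mean():.0f} bpm")
```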
1259 ZBTB17 Gene rs10927875 Polymorphism in Slovak Patients with Dilated Cardiomyopathy
Authors: I. Boroňová, J. Bernasovská, J. Kmec, E. Petrejčíková
Abstract:
Dilated cardiomyopathy (DCM) is a severe cardiovascular disorder characterized by progressive systolic dysfunction due to cardiac chamber dilatation and inefficient myocardial contractility, often leading to chronic heart failure. Recently, genome-wide association studies (GWASs) on DCM have indicated that the ZBTB17 gene rs10927875 single nucleotide polymorphism is associated with DCM. The aim of the study was to determine the distribution of the ZBTB17 gene rs10927875 polymorphism in 50 Slovak patients with DCM and 80 healthy control subjects using Custom TaqMan® SNP Genotyping Assays. Risk factors recorded at baseline in each group included age, sex, body mass index, smoking status, diabetes and blood pressure. The mean age of the patients with DCM was 52.9±6.3 years; the mean age of the individuals in the control group was 50.3±8.9 years. The distribution of the investigated rs10927875 genotypes within the ZBTB17 gene in the cohort of Slovak patients with DCM was as follows: CC (38.8%), CT (55.1%), TT (6.1%); in controls: CC (43.8%), CT (51.2%), TT (5.0%). The risk allele T was more common among the patients with dilated cardiomyopathy than in the normal controls (33.7% versus 30.6%). The differences in the genotype and allele frequencies of the ZBTB17 gene rs10927875 polymorphism were not statistically significant (p=0.6908; p=0.6098). The results of this study suggest that the ZBTB17 gene rs10927875 polymorphism may be a risk factor for susceptibility to DCM in Slovak patients. Studies of larger sample sets and additional functional investigations are needed to fully understand the roles of such genetic associations.
Keywords: Dilated cardiomyopathy, SNP polymorphism, ZBTB17 gene.
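The association test can be re-derived approximately from the percentages quoted in the abstract (about 49 genotyped patients and 80 controls). The sketch below reconstructs the genotype counts from those percentages, which makes the numbers approximate, and uses SciPy's chi-square test rather than the original software.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                      CC  CT  TT   (counts reconstructed from the reported percentages)
patients = np.array([19, 27, 3])
controls = np.array([35, 41, 4])

chi2_g, p_genotype, _, _ = chi2_contingency(np.vstack([patients, controls]))

# collapse genotypes to allele counts: C = 2*CC + CT, T = 2*TT + CT
alleles = np.array([[2 * 19 + 27, 2 * 3 + 27],      # patients: C, T
                    [2 * 35 + 41, 2 * 4 + 41]])     # controls: C, T
chi2_a, p_allele, _, _ = chi2_contingency(alleles)

print(f"risk allele T: patients {alleles[0, 1] / alleles[0].sum():.1%}, "
      f"controls {alleles[1, 1] / alleles[1].sum():.1%}")
print(f"genotype test p = {p_genotype:.3f}, allele test p = {p_allele:.3f}")
```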
1258 Development and in vitro Characterization of Self-nanoemulsifying Drug Delivery Systems of Valsartan
Authors: P. S. Rajinikanth, Yeoh Suyu, Sanjay Garg
Abstract:
The present study aims to prepare and evaluate a self-nanoemulsifying drug delivery system (SNEDDS) of the poorly water-soluble drug valsartan, in order to achieve a better dissolution rate, which would in turn help to enhance its oral bioavailability. The present work describes a SNEDDS of valsartan using Labrafil M 1944 CS, Tween 80 and Transcutol HP. Pseudo-ternary phase diagrams, in the presence and absence of the drug, were plotted to check the emulsification range and to evaluate the effect of valsartan on the emulsification behavior of the phases. Mixtures consisting of the oil (Labrafil M 1944 CS) with the surfactant (Tween 80) and co-surfactant (Transcutol HP) were found to be the optimum formulations. The prepared formulations were evaluated for particle size distribution, nanoemulsifying properties, robustness to dilution, self-emulsification time, turbidity, drug content and in vitro dissolution. The optimized formulations were further subjected to heating-cooling cycles, centrifugation studies, freeze-thaw cycling, particle size distribution and zeta potential measurements to confirm the stability of the formed SNEDDS formulations. The prepared formulation showed a significant improvement in drug solubility compared with the marketed tablet and the pure drug.
Keywords: Self Emulsifying Drug Delivery System, Valsartan, Bioavailability, poorly soluble drug.
1257 Heat Transfer and Entropy Generation in a Partial Porous Channel Using LTNE and Exothermicity/Endothermicity Features
Authors: Mohsen Torabi, Nader Karimi, Kaili Zhang
Abstract:
This work provides a comprehensive study of the heat transfer and entropy generation rates of a horizontal channel partially filled with a porous medium, which experiences internal heat generation or consumption due to an exothermic or endothermic chemical reaction. The focus is on the local thermal non-equilibrium (LTNE) model. The LTNE approach delivers more accurate data regarding the temperature distribution within the system and accordingly provides more accurate Nusselt numbers and entropy generation rates. The Darcy-Brinkman model is used for the momentum equations, and a constant heat flux is assumed as the boundary condition on both the upper and lower surfaces. Analytical solutions are provided for both the velocity and temperature fields. By incorporating the obtained velocity and temperature formulas into the fundamental equations for entropy generation, both local and total entropy generation rates are plotted for a number of cases. Bifurcation phenomena regarding the temperature distribution and the interface heat flux ratio are observed. It is found that the exothermicity or endothermicity characteristic of the channel has a considerable impact on the temperature fields and entropy generation rates.
Keywords: Entropy generation, exothermicity, endothermicity, forced convection, local thermal non-equilibrium, analytical modeling.
1256 Finite Element Study on Corono-Radicular Restored Premolars
Authors: Sandu L., Topală F., Porojan S.
Abstract:
Restoration of endodontically treated teeth is a common problem in dentistry, related to the fractures occurring in such teeth and to the concentration of forces; little information has been available regarding the effect of variations in basic preparation guidelines on the stress distribution. To date, there is still no agreement in the literature about which material or technique can optimally restore endodontically treated teeth. The aim of the present study was to evaluate the influence of the core height and restoration materials on corono-radicular restored upper first premolars. The first step of the study was to create 3D models in order to analyze the teeth, the dowel and core restorations and the overlying full ceramic crowns. The FEM model was obtained by importing the solid model into the ANSYS finite element analysis software. An occlusal load of 100 N was applied, and the stresses occurring in the restorations and tooth structures were calculated. The numerical simulations provide a biomechanical explanation for the stress distribution in prosthetically restored teeth. Within the limitations of the present study, it was found that the core height has no important influence on the stress generated in corono-radicular restored premolars. It can be concluded that the cervical regions of the teeth and restorations were subjected to the highest stress concentrations.
Keywords: 3D models, finite element analysis, dowel and core restoration, full ceramic crown, premolars, structural simulations.
1255 Using Scanning Electron Microscope and Computed Tomography for Concrete Diagnostics of Airfield Pavements
Authors: M. Linek
Abstract:
This article presents a comparison of selected evaluation methods regarding microstructure modification of hardened cement concrete intended for airfield pavements. Basic test results are presented for two pavement-quality concrete lots. The analysis included standard concrete used for airfield pavements and modern material solutions based on modification of the concrete composite. For the basic grain size distribution, concrete composed of cement CEM I 42.5 HSR NA, fine aggregate, coarse aggregate fractions in the form of granite chippings, water and admixtures was considered. For the grain size distribution of the modified concrete, the use of a modern modifier as a substitute for the fine aggregate was proposed. The influence of the modification on the internal concrete structure parameters was determined using a scanning electron microscope. The obtained images were compared with the results obtained using computed tomography. The opportunity to use this type of equipment for internal concrete structure diagnostics, together with an attempt to evaluate its parameters, is presented. The obtained test results led to the conclusion that both methods can be applied for pavement-quality concrete diagnostics, particularly for airfield pavements.
Keywords: Scanning electron microscope, computed tomography, cement concrete, airfield pavements.
1254 Material Density Mapping on Deformable 3D Models of Human Organs
Authors: Petru Manescu, Joseph Azencot, Michael Beuve, Hamid Ladjal, Jacques Saade, Jean-Michel Morreau, Philippe Giraud, Behzad Shariat
Abstract:
Organ motion, especially respiratory motion, is a technical challenge for radiation therapy planning and dosimetry. This motion induces displacements and deformation of the organ tissues within the irradiated region, which need to be taken into account when simulating the dose distribution during treatment. Finite element modeling (FEM) can provide great insight into the mechanical behavior of the organs, since it is based on the biomechanical material properties, the complex geometry of the organs, and anatomical boundary conditions. In this paper we present an original approach that offers the possibility of combining image-based biomechanical models with particle transport simulations. We propose a new method to map material density information derived from CT images onto deformable tetrahedral meshes. Based on the principle of mass conservation, our method can correlate the density variation of organ tissues with geometrical deformations during the different phases of the respiratory cycle. The first results are particularly encouraging, as local error quantification of the density mapping on the organ geometry and of the density variation with organ motion is performed to evaluate and validate our approach.
Keywords: Biomechanical simulation, dose distribution, image guided radiation therapy, organ motion, tetrahedral mesh, 4D-CT.
1253 Probabilistic Method of Wind Generation Placement for Congestion Management
Authors: S. Z. Moussavi, A. Badri, F. Rastegar Kashkooli
Abstract:
Wind farms (WFs) with high levels of penetration are being established in power systems worldwide more rapidly than other renewable resources. The Independent System Operator (ISO), as a policy maker, should propose appropriate places for WF installation in order to maximize the benefits for the investors. There is also a possibility of congestion relief through the new installation of WFs, which should be taken into account by the ISO when proposing the locations for WF installation. In this context, an efficient wind farm (WF) placement method is proposed in order to reduce the burden on congested lines. Since the wind speed is a random variable and load forecasts also contain uncertainties, probabilistic approaches are used for this type of study. An AC probabilistic optimal power flow (P-OPF) is formulated and solved using Monte Carlo Simulations (MCS). In order to reduce computation time, point estimate methods (PEM) are introduced as an efficient alternative to the time-demanding MCS. Subsequently, the optimal WF placement is determined using generation shift distribution factors (GSDF), considering a new parameter entitled the wind availability factor (WAF). In order to obtain more realistic results, N-1 contingency analysis is employed to find the optimal size of the WF by means of line outage distribution factors (LODF). The IEEE 30-bus test system is used to show and compare the accuracy of the proposed methodology.
Keywords: Probabilistic optimal power flow, wind power, point estimate methods, congestion management.
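The point estimate method mentioned above can be illustrated on a toy problem: in the 2m scheme, each uncertain input is represented by two evaluation points at μ ± σ√m with weight 1/(2m) (valid for symmetric inputs; the skewness terms of Hong's full scheme are omitted). In the sketch below, a hypothetical quadratic line_flow function stands in for a full AC power-flow solution, and all means and standard deviations are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)
mu = np.array([80.0, 120.0, 15.0])     # mean wind output, mean load, mean reactive support
sigma = np.array([25.0, 10.0, 3.0])

def line_flow(x):
    """Hypothetical nonlinear mapping from injections to a monitored line flow (MW)."""
    wind, load, q = x
    return 0.6 * load - 0.45 * wind + 0.002 * (load - wind) ** 2 - 0.1 * q

m = mu.size
moment1 = moment2 = 0.0
for k in range(m):                                # 2m deterministic evaluations in total
    for sign in (+1.0, -1.0):
        x = mu.copy()
        x[k] += sign * np.sqrt(m) * sigma[k]      # concentration point for variable k
        y = line_flow(x)
        moment1 += y / (2 * m)
        moment2 += y ** 2 / (2 * m)
pem_mean, pem_std = moment1, np.sqrt(max(moment2 - moment1 ** 2, 0.0))

samples = np.array([line_flow(rng.normal(mu, sigma)) for _ in range(20000)])
print(f"PEM : mean = {pem_mean:6.2f} MW, std = {pem_std:5.2f} MW   (6 evaluations)")
print(f"MCS : mean = {samples.mean():6.2f} MW, std = {samples.std():5.2f} MW   (20000 evaluations)")
```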
1252 Mathematical Study for Traffic Flow and Traffic Density in Kigali Roads
Authors: Kayijuka Idrissa
Abstract:
This work presents a mathematical study of traffic flow and traffic density on Kigali city roads, based on data collected from the National Police of Rwanda in 2012. Several mathematical models were used in order to analyze and compare traffic variables. The work was carried out on Kigali roads, specifically at the roundabouts from the Kigali Business Center (KBC) to Prince House, as the study sites. In this project, we used mathematical tools to analyze the collected data and to understand the relationship between traffic variables. We applied the Poisson distribution to analyze the number of accidents that occurred on this section of the road, from KBC to Prince House. The results show that accidents occurred at very high rates in 2012 because this section has a very narrow single lane on each side, which leads to high congestion of vehicles and, consequently, frequent accidents. Using the speed and density data collected from this section of road, we found that an increase in density results in a decrease in vehicle speed; at the point where the density equals the jam density, the speed becomes zero. The approach is promising for capturing sudden changes in flow patterns and can be utilized in a range of intelligent management strategies, especially in non-recurrent congestion detection and control.
Keywords: Statistical methods, Poisson distribution, car moving techniques, traffic flow.
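The two tools named in the abstract can be illustrated numerically: the Poisson distribution for accident counts and a linear (Greenshields-type) speed-density relation, in which the flow k·v peaks at half the jam density and the speed falls to zero at the jam density. The accident rate, free-flow speed and jam density below are assumed example values, not the Kigali measurements.

```python
import numpy as np
from scipy.stats import poisson

lam = 3.2                                  # assumed mean number of accidents per month on the section
k = np.arange(0, 9)
print("P(k accidents in a month):", np.round(poisson.pmf(k, lam), 3))
print("P(more than 5 accidents) :", round(poisson.sf(5, lam), 3))

v_free = 50.0                              # assumed free-flow speed, km/h
k_jam = 160.0                              # assumed jam density, veh/km
density = np.linspace(0.0, k_jam, 9)
speed = v_free * (1.0 - density / k_jam)   # speed falls linearly to zero at the jam density
flow = density * speed                     # veh/h; maximised at half the jam density
for d, v, q in zip(density, speed, flow):
    print(f"density {d:5.0f} veh/km -> speed {v:4.1f} km/h, flow {q:6.0f} veh/h")
```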
1251 A Combined Approach of a Sequential Life Testing and an Accelerated Life Testing Applied to a Low-Alloy High Strength Steel Component
Authors: D. I. De Souza, D. R. Fonseca, G. P. Azevedo
Abstract:
Sometimes the amount of time available for testing can be considerably less than the expected lifetime of the component. To overcome such a problem, there is the accelerated life testing alternative, aimed at forcing components to fail by testing them at much higher-than-intended application conditions. The models relating the accelerated and normal conditions are known as acceleration models. One possible way to translate test results obtained under accelerated conditions to normal use conditions is through the application of the “Maxwell Distribution Law.” In this paper we apply a combined approach of sequential life testing and accelerated life testing to a low-alloy high-strength steel component used in the construction of overpasses in Brazil. The underlying sampling distribution is the three-parameter Inverse Weibull model. To estimate the three parameters of the Inverse Weibull model, we use a maximum likelihood approach for censored failure data, assuming a linear acceleration condition. To evaluate the accuracy (significance) of the parameter values obtained under normal conditions for the underlying Inverse Weibull model, we apply a sequential life test with a truncation mechanism to the expected normal failure times. An example illustrates the application of this procedure.
Keywords: Sequential Life Testing, Accelerated Life Testing, Underlying Three-Parameter Weibull Model, Maximum Likelihood Approach, Hypothesis Testing.
1250 Simulation of Laser Structuring by Three Dimensional Heat Transfer Model
Authors: Bassim Bachy, Joerg Franke
Abstract:
In this study, a three-dimensional numerical heat transfer model has been used to simulate the laser structuring of a polymer substrate material for Three-Dimensional Molded Interconnect Devices (3D MID), which are used in advanced multifunctional applications. A finite element method (FEM) transient thermal analysis is performed using APDL (ANSYS Parametric Design Language) provided by ANSYS. In this model, the surface heat source is modeled with a Gaussian distribution, and mixed boundary conditions consisting of convection and radiation heat transfer are also considered in the analysis. The model provides a full description of the temperature distribution and calculates the depth and width of the groove upon material removal for different sets of laser parameters, such as laser power and laser speed. This study also includes an experimental procedure to study the effect of the laser parameters on the depth and width of the removed groove, as verification of the modeled results. Good agreement between the experimental and model results is achieved for a wide range of laser powers. It is found that the quality of the laser structuring process is affected by the laser scan speed and laser power. For high structuring quality, it is suggested to use a laser with a high scan speed and moderate to high laser power.
Keywords: Laser Structuring, Simulation, Finite element analysis, Thermal modeling.
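The Gaussian surface heat source mentioned above is commonly written as q(r) = (2AP/πw²)·exp(-2r²/w²), where P is the laser power, w the beam radius and A the absorptivity. The sketch below evaluates this flux on a grid and checks that it integrates back to the absorbed power; the power, spot size and absorptivity are assumed example values, and the APDL model itself is not reproduced.

```python
import numpy as np

P = 12.0          # laser power, W (assumed)
w = 25e-6         # beam radius (1/e^2), m (assumed)
A = 0.6           # absorptivity of the polymer substrate (assumed)

x = np.linspace(-3 * w, 3 * w, 601)
X, Y = np.meshgrid(x, x)
r2 = X ** 2 + Y ** 2
q = 2.0 * A * P / (np.pi * w ** 2) * np.exp(-2.0 * r2 / w ** 2)   # surface flux, W/m^2

dx = x[1] - x[0]
print(f"peak flux: {q.max():.3e} W/m^2")
print(f"absorbed power recovered by integrating the flux: {q.sum() * dx * dx:.2f} W "
      f"(target {A * P:.2f} W)")
```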
1249 Optimization of Energy Conservation Potential for VAV Air Conditioning System using Fuzzy based Genetic Algorithm
Authors: R. Parameshwaran, R. Karunakaran, S. Iniyan, Anand A. Samuel
Abstract:
The objective of this study is to present the test results of a variable air volume (VAV) air conditioning system optimized by a two-objective genetic algorithm (GA). The objective functions are energy savings and thermal comfort. The optimal set points for the fuzzy logic controller (FLC) are the supply air temperature (Ts), the supply duct static pressure (Ps), the chilled water temperature (Tw), and the zone temperature (Tz), which are taken as the problem variables. The supply airflow rate and chilled water flow rate are considered as the constraints. The optimal set point values are obtained from the GA process and assigned to the fuzzy logic controller (FLC) in order to conserve energy and maintain thermal comfort in a real-time VAV air conditioning system. A VAV air conditioning system with an FLC installed in a software laboratory has been used for the energy analysis. The total energy saving obtained by the GA-optimized VAV system with FLC, compared with a constant air volume (CAV) system, is expected to reach 31.5%. The optimal duct static pressure obtained through the genetic-fuzzy methodology contributes to better air distribution by delivering the optimal quantity of supply air to the conditioned space. This combination enhanced the advantages of uniform air distribution, thermal comfort and improved energy savings potential.
Keywords: Energy savings, fuzzy logic, genetic algorithm, thermal comfort.
1248 Standard Deviation of Mean and Variance of Rows and Columns of Images for CBIR
Authors: H. B. Kekre, Kavita Patil
Abstract:
This paper describes a novel and effective approach to content-based image retrieval (CBIR) that represents each image in the database by a vector of feature values called the "standard deviation of mean vectors of color distribution of rows and columns of images for CBIR". In many areas of commerce, government, academia, and hospitals, large collections of digital images are being created. This paper describes an approach that uses image content as the feature vector for the retrieval of similar images. Several classes of features are used to specify queries: colour, texture, shape and spatial layout. Colour features are often easily obtained directly from the pixel intensities. In this paper, feature extraction is performed for the texture descriptors 'variance' and 'variance of variances'. First, the standard deviation of the row means and of the column means is calculated for the R, G, and B planes; these six values, obtained for one image, act as a feature vector. Secondly, we calculate the variance of each row and column of the R, G and B planes of an image; then six standard deviations of these variance sequences are calculated to form a feature vector of dimension six. We applied our approach to a database of 300 BMP images. We determined the capability of automatic indexing by analyzing image content, using color and texture as features and applying the Euclidean distance as the similarity measure.
Keywords: Standard deviation, image retrieval, color distribution, variance, variance of variance, Euclidean distance.
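The two six-dimensional descriptors described above, and the Euclidean similarity measure, translate directly into code. The sketch below uses random arrays as stand-ins for the BMP database images, so it only illustrates the computation, not the reported retrieval results.

```python
import numpy as np

def feature_vectors(img):
    """img: H x W x 3 array (R, G, B planes). Returns the two 6-D descriptors."""
    f_mean, f_var = [], []
    for c in range(3):
        plane = img[:, :, c].astype(float)
        f_mean.append(plane.mean(axis=1).std())   # std of row means
        f_mean.append(plane.mean(axis=0).std())   # std of column means
        f_var.append(plane.var(axis=1).std())     # std of row variances ("variance of variances")
        f_var.append(plane.var(axis=0).std())     # std of column variances
    return np.array(f_mean), np.array(f_var)

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

rng = np.random.default_rng(2)
query = rng.integers(0, 256, size=(128, 128, 3))
database = [rng.integers(0, 256, size=(128, 128, 3)) for _ in range(5)]

q_mean, q_var = feature_vectors(query)
scores = [euclidean(q_mean, feature_vectors(img)[0]) for img in database]
print("distances of the query to the database images (std-of-means descriptor):",
      np.round(scores, 2))
```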
1247 Low-Cost Mechatronic Design of an Omnidirectional Mobile Robot
Authors: S. Cobos-Guzman
Abstract:
This paper presents the results of a mechatronic design based on a 4-wheel omnidirectional mobile robot that can be used in indoor logistic applications. The low-level control is implemented on two open-source hardware boards (Raspberry Pi 3 Model B+ and Arduino Mega 2560) that control four industrial motors, four ultrasound sensors, four optical encoders, a vision system of two cameras, and a Hokuyo URG-04LX-UG01 laser scanner. Moreover, the system is powered by a lithium battery that can supply 24 V DC with a capacity of 20 Ah. The Robot Operating System (ROS) has been implemented on the Raspberry Pi, and the performance of the selected sensors and hardware is evaluated. The mechatronic system is evaluated, and safe modes of power distribution for controlling all the electronic devices are proposed on the basis of different tests. Therefore, based on the different performance results, recommendations are given for using the Raspberry Pi and Arduino in terms of power, communication, and distribution of control for the different devices. According to these recommendations, the sensors are distributed between the two real-time controllers (Arduino and Raspberry Pi). On the other hand, the drivers of the cameras have been implemented in Linux, and a Python program has been implemented to access the cameras. These cameras will be used for implementing a deep learning algorithm to recognize people and objects. In this way, the level of intelligence can be increased in combination with the maps that can be obtained from the laser scanner.
Keywords: Autonomous, indoor robot, mechatronic, omnidirectional robot.