Search results for: particle size distribution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3899


2759 Improved MARS Ciphering Using a Metamorphic-Enhanced Function

Authors: Moataz M. Naguib, Hatem Khater, A. Baith Mohamed

Abstract:

MARS is a shared-key (symmetric) block cipher supporting a 128-bit block size and a variable key size between 128 and 448 bits. MARS has a cryptographic core of several rounds designed to take advantage of powerful results for improving the security/performance trade-off over existing ciphers. In this work, a new function, called the metamorphic function, is added to improve the ciphering process. This function applies XOR, rotate, invert and no-operation logical operations before and after the encryption process. The aim of these operations is to strengthen the MARS ciphering process and to produce a high degree of confusion in the ciphertext.
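
As a hedged illustration only (not the authors' exact construction), the sketch below applies a key-dependent choice of XOR, rotate, invert or no-operation to a 32-bit word before and after a placeholder cipher round; the selection rule, constants and the stand-in round function are all assumptions.

```python
# Illustrative sketch: a key-dependent "metamorphic" wrapper that applies
# XOR, rotate, invert or no-operation to a 32-bit word before and after an
# (abstract) cipher round. Constants and selection rule are assumed.

MASK32 = 0xFFFFFFFF

def rotl32(x, r):
    r %= 32
    return ((x << r) | (x >> (32 - r))) & MASK32

def metamorphic_op(word, key_bits):
    """Pick one of XOR / rotate / invert / no-op from two key bits."""
    op = key_bits & 0b11
    if op == 0:                      # no-operation
        return word
    if op == 1:                      # XOR with a key-derived constant (assumed)
        return word ^ 0xA5A5A5A5
    if op == 2:                      # rotate left by a fixed amount (assumed)
        return rotl32(word, 7)
    return (~word) & MASK32          # bitwise inversion

def metamorphic_round(word, subkey, round_fn):
    """Apply the metamorphic operation before and after one cipher round."""
    word = metamorphic_op(word, subkey)
    word = round_fn(word, subkey)        # placeholder for a MARS core round
    return metamorphic_op(word, subkey >> 2)

if __name__ == "__main__":
    dummy_round = lambda w, k: (w + k) & MASK32   # trivial stand-in round
    print(hex(metamorphic_round(0x12345678, 0x9E3779B9, dummy_round)))
```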

Keywords: AES, MARS, Metamorphic, Cryptography, Block Cipher.

2758 ATC in Competitive Electricity Market Using TCSC

Authors: S. K. Gupta, Richa Bansal

Abstract:

In a deregulated power system structure, power producers and customers share a common transmission network for wheeling power from the point of generation to the point of consumption. All parties in this open access environment may try to purchase energy from the cheaper source for greater profit margins, which may lead to overloading and congestion of certain corridors of the transmission network. This may result in violation of line flow, voltage and stability limits and thereby undermine system security. Utilities therefore need to determine their available transfer capability (ATC) adequately to ensure that system reliability is maintained while serving a wide range of bilateral and multilateral transactions. This paper presents a power transfer distribution factor approach based on AC load flow for the determination and enhancement of ATC. The study has been carried out for the IEEE 24-bus Reliability Test System.
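
As a minimal sketch of the PTDF-based screening step (a DC-style simplification, not the paper's AC formulation), the snippet below derives the transfer limit imposed by each monitored line from its unused capacity and its power transfer distribution factor, and takes the minimum; the line data are assumed.

```python
import numpy as np

def atc_from_ptdf(p_line, p_limit, ptdf):
    """Simplified ATC screening: for each monitored line, the extra transfer a
    transaction can add is (limit - base flow) / PTDF; ATC is the smallest of
    these values over all lines with a meaningful sensitivity."""
    limits = []
    for flow, cap, f in zip(p_line, p_limit, ptdf):
        if f > 1e-6:                      # line loads up with the transaction
            limits.append((cap - flow) / f)
        elif f < -1e-6:                   # line unloads / reverses direction
            limits.append((-cap - flow) / f)
    return min(limits) if limits else np.inf

# Illustrative data (MW flows, MW limits, per-unit sensitivities; assumed):
base_flow = [120.0, 80.0, 45.0]
line_cap  = [200.0, 100.0, 90.0]
ptdf_mn   = [0.55, 0.30, -0.10]
print(f"ATC for the transaction is about {atc_from_ptdf(base_flow, line_cap, ptdf_mn):.1f} MW")
```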

Keywords: Available Transfer Capability, FACTS devices, Power Transfer Distribution Factors.

2757 Strain Based Evaluation of Dents in Pressurized Pipes

Authors: Maziar Ramezani, Thomas Neitzert

Abstract:

A dent is a gross distortion of the pipe cross-section. Dent depth is defined as the maximum reduction in the diameter of the pipe compared to the original diameter. Pipeline dent finite element (FE) simulation and theoretical analysis are conducted in this paper to develop an understanding of the geometric characteristics and strain distribution in pressurized dented pipes. Based on the results, the magnitude of the denting force increases significantly with increasing internal pressure, and the maximum circumferential and longitudinal strains increase with increasing internal pressure and dent depth. The results can be used for characterizing dents and ranking their risks to the integrity of a pipeline.

Keywords: dented steel pipelines, Finite element model, Internal pressure, Strain distribution

2756 The Key Role of the Steroidal Hormones in the Pattern Distribution of the Epiphyseal Structure in Rabbit

Authors: Fatahian Dehkordi R.F, Parchami A.

Abstract:

Steroidal hormones, through their effects on the epiphyseal growth plate, may influence tissue structure properties. The present paper investigates the effects of gonadectomy on the pattern distribution of the epiphyseal structure. Fifteen adult female New Zealand white rabbits were separated into three groups. One group was left intact and the other two underwent surgical gonadectomy; of these two groups, one additionally received steroidal administration. The results showed no statistically significant difference in the mean diameter of the growth plate cells among the three groups. The number of cartilage cells was significantly highest in the gonadectomized group and lowest in the hormonally induced group. Growth plate height was significantly greater in the gonadectomized group than in the other two groups.

Keywords: Steroidal hormones, Ovariectomy, Rabbit, Epiphyseal structure

2755 Dynamics of Phytoplankton Blooms in the Baltic Sea – Numerical Simulations

Authors: L. Dzierzbicka-Głowacka, M. Janecki

Abstract:

The dynamics of phytoplankton blooms in the Baltic Sea have been analyzed by applying the numerical ecosystem model 3D-CEMBS. The model consists of the hydrodynamic model (POP, version 2.1) and the ice model (CICE, version 4.0), which are driven by the atmospheric data model (DATM7). The 3D model has an ecosystem module, activated in 2012 in the operational mode. The ecosystem model consists of 11 main variables: the biomass of small-size phytoplankton, large-size phytoplankton and cyanobacteria, zooplankton biomass, dissolved and molecular detritus, dissolved oxygen concentration, as well as concentrations of nutrients, including nitrates, ammonia, phosphates and silicates. The 3D-CEMBS model is an effective tool for solving problems related to phytoplankton bloom dynamics in the Baltic Sea.

Keywords: Ecosystem model, phytoplankton, Baltic Sea

2754 A Parametric Study on Deoiling Hydrocyclones Flow Field

Authors: Maysam Saidi, Reza Maddahian, Bijan Farhanieh

Abstract:

The flow field of deoiling hydrocyclones is investigated through a parametric study of the effect of cone angle on flow behaviour. The flow field is obtained from three-dimensional simulations with the OpenFOAM code. Because of the anisotropic behaviour of the flow inside hydrocyclones, large eddy simulation (LES) is a suitable method for predicting the flow field, since it resolves the large scales and models the isotropic small scales. LES is used to predict the flow behaviour for three different cone angles. Differences in tangential velocity and pressure distribution are reported.

Keywords: Deoiling hydrocyclones, Flow field, Hydrocyclone cone angle, Large Eddy Simulation, Pressure distribution

2753 Statistical Analysis for Overdispersed Medical Count Data

Authors: Y. N. Phang, E. F. Loh

Abstract:

Many researchers have suggested the use of zero inflated Poisson (ZIP) and zero inflated negative binomial (ZINB) models in modeling overdispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. The studies indicate that ZIP and ZINB always provide a better fit than the ordinary Poisson and negative binomial models in modeling overdispersed medical count data. In this study, we propose the use of the zero inflated inverse trinomial (ZIIT), zero inflated Poisson inverse Gaussian (ZIPIG) and zero inflated strict arcsine (ZISA) models in modeling overdispersed medical count data. These proposed models are not widely used by researchers, especially in the medical field. The results show that these three suggested models can serve as alternative models for overdispersed medical count data, as supported by their application to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian and strict arcsine are discrete distributions with a cubic variance function of the mean. Therefore, ZIIT, ZIPIG and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling overdispersed medical count data when ZIP and ZINB are inadequate.
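
For the two baseline models mentioned (ZIP and ZINB), a minimal fitting sketch using the statsmodels package is shown below; the simulated data and the AIC comparison are illustrative assumptions, and the proposed ZIIT, ZIPIG and ZISA models are not available in this library.

```python
import numpy as np
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

# Simulated overdispersed counts with excess zeros (illustrative only).
rng = np.random.default_rng(0)
n = 500
structural_zero = rng.random(n) < 0.3                 # 30% structural zeros
counts = rng.negative_binomial(2, 0.4, size=n)        # overdispersed counts
y = np.where(structural_zero, 0, counts)
X = np.ones((n, 1))                                   # intercept-only design

zip_fit  = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(disp=False, maxiter=200)
zinb_fit = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, inflation="logit").fit(disp=False, maxiter=200)

print("ZIP  AIC:", round(zip_fit.aic, 1))
print("ZINB AIC:", round(zinb_fit.aic, 1))            # lower AIC indicates the better fit
```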

Keywords: Zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson’s goodness of fit.

2752 Developing Optical Sensors with Application of Cancer Detection by Elastic Light Scattering Spectroscopy

Authors: May Fadheel Estephan, Richard Perks

Abstract:

Cancer is a serious health concern that affects millions of people worldwide. Early detection and treatment are essential for improving patient outcomes, but current methods for cancer detection have limitations such as low sensitivity and specificity. The aim of this study was to develop an optical sensor for cancer detection using elastic light scattering spectroscopy (ELSS), a non-invasive optical technique that can characterize the size and concentration of particles in a solution. An optical probe was fabricated with a 100-μm-diameter core and a 132-μm centre-to-centre separation, and was used to measure the ELSS spectra of polystyrene spheres with diameters of 2 μm, 0.8 μm, and 0.413 μm. The spectra were collected using a spectrometer and analysed with software that fits them to a theoretical model to determine the size and concentration of the spheres. The results showed that the optical probe was able to differentiate between the three sphere sizes and to detect polystyrene spheres at suspension concentrations as low as 0.01%. These findings demonstrate the potential of ELSS for cancer detection: because ELSS can non-invasively characterize the number and size of cells in a tissue sample, this information can be used to identify cancer cells and assess the stage of the disease. Further research is needed to evaluate the clinical performance of ELSS for cancer detection.

Keywords: Elastic Light Scattering Spectroscopy, Polystyrene spheres in suspension, optical probe, fibre optics.

2751 A Location Routing Model for the Logistic System in the Mining Collection Centers of the Northern Region of Boyacá-Colombia

Authors: Erika Ruíz, Luis Amaya, Diego Carreño

Abstract:

The main objective of this study is to design a mathematical model for the logistics of mining collection centers in the northern region of the department of Boyacá (Colombia), determining the structure that facilitates the flow of products along the supply chain. In order to achieve this, it is necessary to define a suitable design of the distribution network, taking into account the products, the customers' characteristics and the availability of information. Likewise, some other aspects must be defined, such as the number and capacity of collection centers to establish and the routes that must be taken to deliver products to the customers, among others. This research uses an operations research formulation employed in the design of distribution networks, known as the Location Routing Problem (LRP).
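
A toy sketch of the location part of the LRP (simplified to a capacitated facility-location core, with the vehicle-routing subproblem omitted) using the PuLP package; all data, costs and the simplification are assumptions for illustration.

```python
import pulp

# Toy data (all numbers assumed): candidate collection centers and mining customers.
centers   = {"C1": {"open_cost": 100, "cap": 60},
             "C2": {"open_cost": 120, "cap": 80},
             "C3": {"open_cost": 90,  "cap": 50}}
customers = {"M1": 30, "M2": 25, "M3": 40}              # demand in tons
ship_cost = {("C1", "M1"): 4, ("C1", "M2"): 6, ("C1", "M3"): 9,
             ("C2", "M1"): 5, ("C2", "M2"): 4, ("C2", "M3"): 3,
             ("C3", "M1"): 7, ("C3", "M2"): 3, ("C3", "M3"): 8}

prob = pulp.LpProblem("simplified_location_routing", pulp.LpMinimize)
y = {c: pulp.LpVariable(f"open_{c}", cat="Binary") for c in centers}
x = {(c, m): pulp.LpVariable(f"assign_{c}_{m}", cat="Binary") for c, m in ship_cost}

# Objective: opening costs plus demand-weighted transport costs.
prob += (pulp.lpSum(centers[c]["open_cost"] * y[c] for c in centers)
         + pulp.lpSum(ship_cost[c, m] * customers[m] * x[c, m] for c, m in ship_cost))

for m in customers:                                     # every customer is served exactly once
    prob += pulp.lpSum(x[c, m] for c in centers) == 1
for c in centers:                                       # capacity applies only if the center is opened
    prob += pulp.lpSum(customers[m] * x[c, m] for m in customers) <= centers[c]["cap"] * y[c]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("opened centers:", [c for c in centers if y[c].value() > 0.5])
print("total cost:", pulp.value(prob.objective))
```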

Keywords: Location routing problem, logistic, mining collection, model.

2750 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, Service Providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as Traffic Engineering (TE). Common objectives of traffic engineering include balancing the traffic distribution across the network and avoiding congestion hot spots. Raj P H and SVK Raja designed the Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network, and the congestion hot spots are identified based on the CPD. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the new Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in the Bayesian network. We first illustrate our approach with a simple network and then present a formal analysis of the Pragati Node Popularity approach. Our PNP approach shows that, for any given network, it identifies exactly the same result as the Bayesian approach with minimum effort. We further extend the result to a more generic one: it holds for any network topology, even when the network contains loops. A theoretical insight of our result is that the optimal routing is always shortest-path routing with respect to some consideration of hot spots in the network.
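
The PNP algorithm itself is not reproduced here; as a hedged analogue of identifying hot spots from topology alone, the sketch below ranks nodes by shortest-path betweenness centrality with networkx, which likewise requires no per-node conditional probability tables. The example topology and link costs are assumed.

```python
import networkx as nx

# Assumed example backbone topology; edge weights stand in for IGP link costs.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("C", "D", 1),
    ("A", "E", 2), ("E", "D", 2), ("B", "E", 1), ("C", "E", 1),
])

# Betweenness centrality counts how many shortest paths pass through each node:
# nodes that carry many shortest paths are candidate congestion hot spots.
popularity = nx.betweenness_centrality(G, weight="weight", normalized=True)

for node, score in sorted(popularity.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```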

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

2749 Numerical Investigation of Nozzle Shape Effect on Shock Wave in Natural Gas Processing

Authors: Esam I. Jassim, Mohamed M. Awad

Abstract:

Natural gas flow contains undesirable solid particles, liquid condensation, and/or oil droplets and requires reliable removal equipment to perform filtration. Recent natural gas processing applications demand compactness and reliability of process equipment. Since conventional means are sophisticated in design, poor in efficiency, and lack robustness, a supersonic nozzle has been introduced as an alternative means to meet such demands. A 3-D convergent-divergent nozzle is simulated using a commercial code for nozzle pressure ratios (NPR) varying from 1.2 to 2. Six different nozzle shapes are numerically examined to illustrate the position of the shock wave, as this spot could be considered a benchmark for particle separation. Rectangular, triangular, circular, elliptical, pentagonal, and hexagonal nozzles, all with the same cross-sectional area, are simulated using the Fluent code. The simple one-dimensional inviscid theory does not describe the actual features of the fluid flow precisely, as it ignores the impact of nozzle configuration on the flow properties. CFD simulation results, however, show that nozzle geometry influences the flow structures, including the location of the shock wave. The CFD analysis predicts shock appearance when p01/pa > 1.2 for almost all geometries, located at the lower area ratio (Ae/At). Simulation results showed that the shock wave in the elliptical nozzle is the farthest from the throat among the shapes at relatively small NPR; as NPR increases, the hexagonal nozzle becomes the farthest. The numerical results are compared with available experimental data and show good agreement in terms of shock location and flow structure.
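
The one-dimensional inviscid theory referred to above can be written down compactly; the sketch below evaluates the isentropic area-Mach relation and the static-to-total pressure ratio at a given area ratio, which is the usual first estimate of conditions ahead of a normal shock before the 3-D geometry effects reported here are considered. The gas properties (γ = 1.3 as a typical value for natural gas) and the example area ratio are assumptions.

```python
from scipy.optimize import brentq

gamma = 1.3  # assumed ratio of specific heats for natural gas

def area_ratio(M, g=gamma):
    """Isentropic A/A* as a function of Mach number."""
    return (1.0 / M) * ((2.0 / (g + 1)) * (1 + 0.5 * (g - 1) * M**2)) ** ((g + 1) / (2 * (g - 1)))

def supersonic_mach(a_ratio, g=gamma):
    """Supersonic root of the area-Mach relation (bracketed between 1 and 10)."""
    return brentq(lambda M: area_ratio(M, g) - a_ratio, 1.0 + 1e-9, 10.0)

def static_to_total_pressure(M, g=gamma):
    """p / p0 from the isentropic relation."""
    return (1 + 0.5 * (g - 1) * M**2) ** (-g / (g - 1))

# Example: a station in the divergent section with A/At = 1.5 (assumed)
M_local = supersonic_mach(1.5)
print(f"Supersonic Mach at A/At = 1.5: {M_local:.3f}")
print(f"p/p0 just ahead of a normal shock there: {static_to_total_pressure(M_local):.3f}")
```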

Keywords: CFD, Particle Separation, Shock wave, Supersonic Nozzle.

2748 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the Particle Filter is developed, which considers the anomalous fluctuation scaling known as Taylor's law. This method is extended to handle sales data left incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A way to determine the optimal stock level, together with a pricing of the cost of waste reduction, is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, the pricing of the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss realizes half the disposal at a proportionality constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially with large sales numbers.
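
As a hedged illustration of the fluctuation scaling the method relies on (not the authors' Particle Filter pipeline), the sketch below estimates Taylor's-law scaling, std ≈ C·mean^α, from per-item daily sales by a log-log fit; the simulated data and the exact form of the scaling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-item daily sales (illustrative): items with widely different
# mean sales, each with Poisson-like counts plus extra day-to-day dispersion.
means = rng.lognormal(mean=2.0, sigma=1.2, size=200)
sales = np.array([rng.poisson(m * rng.gamma(5.0, 0.2, size=365)) for m in means])

item_mean = sales.mean(axis=1)
item_std = sales.std(axis=1)

# Taylor's law (fluctuation scaling): std ~ C * mean**alpha, fitted in log-log space.
alpha, logC = np.polyfit(np.log(item_mean), np.log(item_std), 1)
print(f"estimated Taylor exponent alpha = {alpha:.2f}, prefactor C = {np.exp(logC):.2f}")
```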

Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.

2747 The Dividend Payments for General Claim Size Distributions under Interest Rate

Authors: Li-Li Li, Jinghai Feng, Lixin Song

Abstract:

This paper evaluates the dividend payments for general claim size distributions in the presence of a dividend barrier. The surplus of a company is modeled using the classical risk process perturbed by diffusion and, in addition, is assumed to accrue interest at a constant rate. After presenting the integro-differential equation with initial conditions that the dividend payments satisfy, the paper derives a useful expression for the dividend payments by employing the theory of Volterra equations. Furthermore, the optimal value of the dividend barrier is found. Finally, numerical examples illustrate the optimality of the dividend barrier and the effects of the parameters on the dividend payments.

Keywords: Dividend payout, Integro-differential equation, Jump-diffusion model, Volterra equation

2746 Design and Analysis of an Electro Thermally Symmetrical Actuated Microgripper

Authors: Sh. Foroughi, V. Karamzadeh, M. Packirisamy

Abstract:

This paper presents the design and analysis of an electrothermally symmetrically actuated microgripper applicable to performing micro-assembly or biological cell manipulation. Integrating micro-optics with the microdevice achieves extremely precise control over the operation of the device. Geometry, material, actuation, control, accuracy in measurement and temperature distribution are important factors which have to be taken into account when designing an efficient microgripper device. In this work, analyses of four different geometries are performed by means of COMSOL Multiphysics 5.2, implementing finite element methods. The temperature distribution along the fingertip, the displacement of the gripper site, as well as the optical efficiency versus displacement and electrical potential, are then illustrated. The results show that, in addition to industrial applications, the device can be used as a cell manipulator.

Keywords: Electro thermal actuator, MEMS, Microgripper, MOEMS.

2745 Slime Mould Optimization Algorithms for Optimal Distributed Generation Integration in Distribution Electrical Network

Authors: F. Fissou Amigue, S. Ndjakomo Essiane, S. Pérabi Ngoffé, G. Abessolo Ondoa, G. Mengata Mengounou, T. P. Nna Nna

Abstract:

This paper proposes a method for determining the optimal point of integration of distributed generation (DG) in a distribution grid. Slime mould optimization is applied to determine the best node in the cases of one and two injection points. The problem has been modeled as an optimization problem where the objective is to minimize Joule losses and the main constraint is to keep the voltage at each node within limits. The proposed method has been implemented in MATLAB and applied to the IEEE 33-node and 69-node networks. Comparing the results obtained with other algorithms showed that the slime mould optimization algorithm (SMOA) achieves the best reduction of power losses and a good improvement of the voltage profile.

Keywords: Optimization, distributed generation, integration, slime mould algorithm.

2744 Revision of Genus Polygonum L. s.l. in Flora of Armenia

Authors: Hasmik P. Ter-Voskanyan

Abstract:

The account of the genus Polygonum L. in the "Flora of Armenia" was made more than five decades ago. Since then, many expeditions have been carried out in different regions of Armenia and a large amount of herbarium material has been collected. The genus included 5 sections with 20 species. Many authors have since accepted the sections as separate genera on the basis of anatomical, morphological, palynological and molecular data. Accordingly, it became clear that the taxonomy of the Armenian representatives of Polygonum s. l. also needs revision. New literature data and our investigations of living and herbarium material (ERE, LE), with specification of the morphological characters, distribution, ecology, and flowering and fruiting terms, led us to the conclusion that the genus Polygonum s. l. has to be split into 5 different genera (Aconogonon (Meisn.) Reichenb., Bistorta (L.) Scop., Fallopia Adans., Persicaria Mill., Polygonum L. s. s.). The number of species has been reduced to 16. New determination keys have been created for each genus.

Keywords: Aconogonon (Meisn.) Reichenb., Bistorta (L.) Scop., Fallopia Adans., Persicaria Mill., Polygonum L. s. s., Flora of Armenia.

2743 A New Muscle Architecture Model with Non-Uniform Distribution of Muscle Fiber Types

Authors: Javier Navallas, Armando Malanda, Luis Gila, Javier Rodriguez, Ignacio Rodriguez

Abstract:

According to previous studies, some muscles present a non-homogeneous spatial distribution of muscle fiber types and motor unit types. However, available muscle models only deal with muscles with homogeneous distributions. In this paper, a new muscle architecture model is proposed that permits the construction of non-uniform distributions of muscle fibers within the muscle cross-section. The idea behind it is the use of a motor unit placement algorithm that controls the spatial overlapping of the motor unit territories of each motor unit type. Results show the capability of the new algorithm to reproduce arbitrary muscle fiber type distributions.

Keywords: muscle model, muscle architecture, motor unit, EMG simulation.

2742 Stress Variation of Underground Building Structure during Top-Down Construction

Authors: Soo-yeon Seo, Seol-ki Kim, Su-jin Jung

Abstract:

In the construction of a building, it is necessary to minimize the construction period and to secure enough work space for stacking materials during construction, especially in city areas. To this end, various top-down construction methods have been developed and are widely used in Korea. This paper investigates, through an analytical approach, the stress variation in the underground structure of a building constructed using SPS (Strut as Permanent System), a top-down method used in Korea. Various types of earth pressure distribution related to ground conditions were considered in the structural analysis of an example structure at each step of the excavation. From the analysis, the highest member force acting on the beams was found when the ground type was medium sandy soil, and a stress concentration was found in the corner areas.

Keywords: Construction of building, top-down construction method, earth pressure distribution, member force, stress concentration.

2741 FEA for Teeth Preparations Marginal Geometry

Authors: L. Sandu, F. Topalâ, S. Porojan

Abstract:

Knowledge of the factors which influence stress and its distribution is of key importance to the successful production of durable restorations. One of these is the marginal geometry. The objective of this study was to evaluate, by finite element analysis (FEA), the influence of different marginal designs on the stress distribution in teeth prepared for cast metal crowns. Five margin designs were taken into consideration: shoulderless, chamfer, shoulder, sloped shoulder and shoulder with bevel. For each kind of preparation, three-dimensional finite element analyses were performed. Maximal equivalent stresses were calculated and stress patterns were represented in order to compare the marginal designs. Within the limitations of this study, the shoulder and beveled shoulder margin preparations are preferred for cast metal crowns from a biomechanical point of view.

Keywords: finite element, marginal geometry, metal crown

2740 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to their class or feature value; and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, and two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this process is costly as the complexity of t-SNE is quadratic, and it would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets' dynamics.
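
The sketch below is not the paper's Index t-SNE optimization; it is a simple hedged analogue of anchoring a new embedding on a previous one. scikit-learn's TSNE accepts an explicit array as initialization, so a later snapshot can start from coordinates carried over from the earlier embedding; the nearest-neighbour matching used to carry them over is an assumption for illustration.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_old = rng.normal(size=(500, 20))                          # first snapshot of the dataset
X_new = X_old + rng.normal(scale=0.1, size=X_old.shape)     # a later, slightly drifted snapshot

# Embed the first snapshot normally.
emb_old = TSNE(n_components=2, random_state=0).fit_transform(X_old)

# Initialize the new embedding from the old one: each new point starts at the
# 2-D position of its nearest neighbour in the old snapshot, so cluster
# positions remain comparable between the two embeddings.
nn = NearestNeighbors(n_neighbors=1).fit(X_old)
_, idx = nn.kneighbors(X_new)
init_coords = emb_old[idx[:, 0]].astype(np.float64)

emb_new = TSNE(n_components=2, init=init_coords, random_state=0).fit_transform(X_new)
print(emb_old.shape, emb_new.shape)
```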

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.

2739 Uniformity of Dose Distribution in Radiation Fields Surrounding the Spine using Film Dosimetry and Comparison with 3D Treatment Planning Software

Authors: Sadegh Masoudi, Vahid Fayaz, Hassan Zandi, Asieh Tavakol

Abstract:

The overall penumbra is usually defined as the distance, p20–80, separating the 20% and 80% dose levels in the lateral beam profile at the depth of interest. This overall penumbra also accounts for the fact that some photons emitted by the distal parts of the source are only partially attenuated by the collimator. Medulloblastoma is the most common type of childhood brain tumor and often spreads to the spine. Current guidelines call for surgery to remove as much of the tumor as possible, followed by radiation of the brain and spinal cord, and finally treatment with chemotherapy. The purpose of this paper was to present results on the uniformity of dose distribution in radiation fields surrounding the spine using film dosimetry, and to compare them with 3D treatment planning software.

Keywords: Absorbed Dose, Spine, Radiotherapy, 3D treatment planning software

2738 Blind Source Separation based on the Estimation for the Number of the Blind Sources under a Dynamic Acoustic Environment

Authors: Takaaki Ishibashi

Abstract:

Independent component analysis can estimate unknown source signals from their mixtures under the assumption that the source signals are statistically independent. However, in a real environment, the separation performance often deteriorates because the number of source signals differs from the number of sensors. In this paper, we propose a method for estimating the number of sources based on the joint distribution of the observed signals under a two-sensor configuration. Several simulation results show that the number of sources coincides with the number of peaks in the histogram of the distribution. The proposed method can estimate the number of sources even when it is larger than the number of observed signals. The proposed method has been verified by several experiments.
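
The sketch below is a hedged analogue rather than the paper's exact estimator: for sparse (speech-like) sources mixed onto two sensors, the histogram of per-sample directions in the sensor plane clusters around the mixing directions, so counting its peaks estimates the number of sources. The sparse-source assumption, mixing matrix and thresholds are all assumed.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)

# Three sparse (speech-like) sources, instantaneously mixed onto two sensors.
n, n_src = 20000, 3
S = rng.laplace(size=(n_src, n)) * (rng.random((n_src, n)) < 0.2)   # sparse activity
angles = np.array([0.3, 1.0, 1.9])                                   # mixing directions (rad)
A = np.vstack([np.cos(angles), np.sin(angles)])                      # 2 x 3 mixing matrix
X = A @ S                                                            # two-sensor observations

# Histogram of the per-sample direction in the sensor plane: with sparse sources,
# significant samples cluster around the column directions of A, so the number
# of histogram peaks estimates the number of sources.
amp = np.hypot(X[0], X[1])
mask = amp > 0.1 * amp.max()
theta = np.arctan2(X[1, mask], X[0, mask]) % np.pi
hist, _ = np.histogram(theta, bins=90, range=(0, np.pi))
peaks, _ = find_peaks(hist, height=0.1 * hist.max(), distance=5)
print("estimated number of sources:", len(peaks))
```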

Keywords: blind source separation, independent component analysis, estimation of the number of blind sources, voice activity detection, target extraction.

2737 Optimal Maintenance and Improvement Policies in Water Distribution System: Markov Decision Process Approach

Authors: Jong Woo Kim, Go Bong Choi, Sang Hwan Son, Dae Shik Kim, Jung Chul Suh, Jong Min Lee

Abstract:

A Markov decision process (MDP) based methodology is implemented in order to establish the optimal schedule which minimizes the cost. The formulation of the MDP problem is presented using information about the current state of the pipe, the improvement cost, the failure cost and a pipe deterioration model. The objective function and the detailed algorithm of dynamic programming (DP) are modified owing to the difficulty of implementing the conventional DP approaches. The optimal schedule derived from the suggested model is compared to several policies via Monte Carlo simulation. The validity of the solution and the improvement in computational time are demonstrated.
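
A toy sketch of the MDP idea (not the authors' model): pipe condition states, "do nothing" versus "replace" actions, the deterioration probabilities and the cost numbers are all assumed, and the policy is obtained by standard value iteration rather than the modified DP described in the paper.

```python
import numpy as np

# States: pipe condition 0 (new) .. 3 (failed). Actions: 0 = do nothing, 1 = replace.
n_states, gamma = 4, 0.95
P = np.zeros((2, n_states, n_states))
P[0] = [[0.80, 0.15, 0.04, 0.01],        # deterioration if left alone (assumed)
        [0.00, 0.70, 0.25, 0.05],
        [0.00, 0.00, 0.60, 0.40],
        [0.00, 0.00, 0.00, 1.00]]
P[1, :, 0] = 1.0                          # replacement returns the pipe to "new"

cost = np.array([[0.0, 1.0, 5.0, 60.0],          # failure cost dominates "do nothing"
                 [10.0, 10.0, 10.0, 25.0]])      # replacement (plus failure) cost

V = np.zeros(n_states)
for _ in range(500):                      # value iteration on expected discounted cost
    Q = cost + gamma * (P @ V)            # Q[a, s]
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmin(axis=0)
print("optimal action per state (0 = do nothing, 1 = replace):", policy)
```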

Keywords: Markov decision processes, Dynamic Programming, Monte Carlo simulation, Periodic replacement, Weibull distribution.

2736 Influence of IMV on Space Station

Authors: Fu Shiming, Pei Yifei

Abstract:

To study the impact of inter-module ventilation (IMV) on the space station, a computational fluid dynamics (CFD) model under the influence of IMV, together with the mathematical model, boundary conditions and calculation method, is first established to analyze the influence of IMV on cabin air flow characteristics and velocity distribution; an integrated overall thermal mathematical model of the space station is then used to consider the impact of IMV on thermal management. The results show that IMV has a significant influence on the cabin air flow: an IMV flow rate within a certain range can effectively improve the air velocity distribution in the cabin, whereas an excessive flow rate may lead to its deterioration. IMV also affects the heat distribution among the different modules of the space station and thus its thermal management; the use of IMV can effectively maintain the temperature levels of the different modules and helps the space station dissipate waste heat.

Keywords: CFD, Environment control and life support, Space station, Thermal management, Thermal mathematical model.

2735 Experimental Investigation of On-Body Channel Modelling at 2.45 GHz

Authors: Hasliza A. Rahim, Fareq Malek, Nur A. M. Affendi, Azuwa Ali, Norshafinash Saudin, Latifah Mohamed

Abstract:

This paper presents an experimental investigation of on-body channel fading at 2.45 GHz considering two states of user body movement: stationary and mobile. A pair of body-worn antennas was utilized in this measurement campaign. A statistical analysis was performed by comparing the measured on-body path loss to five well-known distributions: lognormal, normal, Nakagami, Weibull and Rayleigh. The results showed that the average path loss of the moving arm was up to 3.5 dB higher than the path loss in the sitting position for the upper-arm-to-left-chest link. The analysis also concluded that the Nakagami distribution provided the best fit for most of the on-body static link path loss in the standing-still and sitting positions, while the arm movement is best described by the lognormal distribution.
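
As a hedged sketch of the distribution-fitting step (the measured data are not available, so synthetic samples stand in), scipy.stats can fit each candidate distribution by maximum likelihood and the fits can be compared by AIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for measured on-body path-loss samples (dB); illustrative only.
samples = rng.lognormal(mean=np.log(55.0), sigma=0.08, size=400)

candidates = {
    "lognormal": stats.lognorm,
    "normal":    stats.norm,
    "Nakagami":  stats.nakagami,
    "Weibull":   stats.weibull_min,
    "Rayleigh":  stats.rayleigh,
}

results = {}
for name, dist in candidates.items():
    params = dist.fit(samples)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(samples, *params))
    results[name] = 2 * len(params) - 2 * loglik     # AIC; lower means better fit

for name, aic in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} AIC = {aic:8.1f}")
```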

Keywords: On-Body channel communications, fading characteristics, statistical model.

2734 A Study of Dose Distribution and Image Quality under an Automatic Tube Current Modulation (ATCM) System for a Toshiba Aquilion 64 CT Scanner Using a New Design of Phantom

Authors: S. Sookpeng, C. J. Martin, D. J. Gentle

Abstract:

Automatic tube current modulation (ATCM) systems are available from all CT manufacturers and are used for the majority of patients. Understanding how the systems work and their influence on patient dose and image quality is important for CT users in order to gain the most effective use of the systems. In the present study, a new phantom was used to evaluate dose distribution and image quality under ATCM operation for the Toshiba Aquilion 64 CT scanner using different ATCM options and a fixed mAs technique. A routine chest, abdomen and pelvis (CAP) protocol was selected for study, and Gafchromic film was used to measure entrance surface dose (ESD), peripheral dose and central axis dose in the phantom. The results show the dose reductions achievable with various ATCM options in relation to the target noise. The dose and image noise distributions were more uniform when the ATCM system was implemented than with the fixed mAs technique. The lower limit set for the tube current affects the modulation, especially for the lower-dose options: this limit prevented the tube current from being reduced further, and therefore the lower-dose ATCM setting resembled a fixed mAs technique. Selection of a lower tube current limit is likely to reduce doses for smaller patients in scans of the chest and neck regions.

Keywords: Computed Tomography (CT), Automatic Tube Current Modulation (ATCM), Automatic Exposure Control (AEC).

2735 An Efficient Activity Network Reduction Algorithm Based on the Label Correcting Tracing Algorithm

Authors: Weng Ming Chu

Abstract:

When faced with stochastic networks whose activities have uncertain durations, securing the network completion time becomes problematic, not only because of the non-identical duration pdf of each node, but also because of the interdependence of network paths. As evidenced by Adlakha & Kulkarni [1], many methods and algorithms have been put forward in an attempt to resolve this issue, but most have encountered the same large-size network problem. Therefore, in this research, we focus on network reduction through a combined series/parallel mechanism. Our suggested algorithm, named the Activity Network Reduction Algorithm (ANRA), can efficiently transform a large-size network into an S/P Irreducible Network (SPIN). SPIN can enhance stochastic network analysis, as well as serve as a judgment of symmetry in graph theory.

Keywords: Series/Parallel network, Stochastic network, Network reduction, Interdictive Graph, Complexity Index.

2732 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — In the Case of Critical Dataset Size —

Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno

Abstract:

STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from a decision table, which is considered as a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance and by comparison with conventional methods. However, scope for further development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity for rule induction from datasets with attribute values contaminated by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.

Keywords: Rule induction, decision table, missing data, noise.

2731 Research on Weakly Hard Real-Time Constraints and Their Boolean Combination to Support Adaptive QoS

Authors: Xiangbin Zhu

Abstract:

Advances in computing applications in recent years have prompted the demand for more flexible scheduling models to meet QoS demands. Moreover, in practical applications, partly violated temporal constraints can be tolerated if the violations follow a certain distribution. The traditional Liu and Layland model therefore needs to be extended to adapt to these circumstances. There are two such extensions: the (m, k)-firm model and the Window-Constrained model. This paper studies weakly hard real-time constraints and their Boolean combination to support QoS. The fact that a practical application can tolerate some violations of a temporal constraint under a certain distribution is exploited to support adaptive QoS on an open real-time system. The experimental results show that these approaches are effective compared to traditional scheduling algorithms.
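
For the (m, k)-firm model mentioned above, a minimal sketch of the constraint check is shown below: a job stream satisfies an (m, k)-firm constraint if every window of k consecutive jobs contains at least m deadline hits. The example stream and parameters are assumed.

```python
from collections import deque

def satisfies_mk_firm(deadline_met, m, k):
    """True if every window of k consecutive jobs contains at least m deadline hits."""
    window = deque(maxlen=k)
    for hit in deadline_met:
        window.append(hit)
        if len(window) == k and sum(window) < m:
            return False
    return True

# Example (assumed): 1 = deadline met, 0 = deadline missed.
stream = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
print(satisfies_mk_firm(stream, m=3, k=4))   # needs at least 3 hits in every 4 consecutive jobs
print(satisfies_mk_firm(stream, m=4, k=4))   # stricter constraint fails on any miss
```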

Keywords: Weakly Hard Real-Time, Real-Time, Scheduling, Quality of Service.

2730 Flow Characteristics of Pulp Liquid in Straight Ducts

Authors: M. Sumida

Abstract:

An experimental investigation was performed on pulp liquid flow in straight ducts with a square cross-section. Fully developed steady flow was visualized and the fiber concentration was obtained using a light-section method developed by the authors. The obtained results reveal quantitatively, in a definite form, the distribution of the fiber concentration. From these results and measurements of the pressure loss, it is found that the flow characteristics of pulp liquid in ducts can be classified into five patterns. The relationships among the distributions of the mean and fluctuation of the fiber concentration, the pressure loss and the flow velocity are discussed, and the features of each pattern are extracted. The degree of non-uniformity of the fiber concentration, indicated by the standard deviation of its distribution, decreased from 0.3 to 0.05 with an increase in the velocity of the tested pulp liquid from 0.4 to 0.8%.

Keywords: Fiber Concentration, Flow Characteristic, Pulp Liquid, Straight Duct.
