Search results for: distribution analysis

29762 Max-Entropy Feed-Forward Clustering Neural Network

Authors: Xiaohan Bookman, Xiaoyan Zhu

Abstract:

The outputs of a non-linear feed-forward neural network are positive and, when normalized to sum to one, can be treated as probabilities. Under the entropy-based principle, the outputs for each sample can then be interpreted as that sample's distribution over the clusters. The entropy-based principle allows an unknown distribution to be estimated under a set of constraints. Since this paper divides the feed-forward neural network into two processes, our constraints are the abstracted sample features computed in the abstraction process, and the final outputs are the probability distribution over clusters produced in the clustering process. Incorporating the entropy-based principle into the feed-forward neural network in this way yields a clustering method. We conducted experiments on six open UCI data sets, compared our method with several baselines, and used purity as the evaluation measure. The results show that our method outperforms all the baselines, which are widely used clustering methods.
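
As an illustration of the purity measure referred to above, the following sketch computes purity from hypothetical cluster assignments; the labels and the implementation are assumptions, not the paper's code.

```python
import numpy as np

# Purity: each predicted cluster is credited with its most frequent true class,
# and the credited counts are summed and divided by the number of samples.
def purity(y_true, y_pred):
    total = 0
    for c in np.unique(y_pred):
        members = y_true[y_pred == c]        # true labels of samples assigned to cluster c
        total += np.bincount(members).max()  # size of the dominant true class in this cluster
    return total / len(y_true)

# Hypothetical labels for illustration only.
y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0])
print(purity(y_true, y_pred))
```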

Keywords: feed-forward neural network, clustering, max-entropy principle, probabilistic models

Procedia PDF Downloads 427
29761 Investigation into the Optimum Hydraulic Loading Rate for Selected Filter Media Packed in a Continuous Upflow Filter

Authors: A. Alzeyadi, E. Loffill, R. Alkhaddar

Abstract:

Continuous upflow filters can combine nutrient (nitrogen and phosphate) and suspended-solid removal in one unit process. The contaminant removal can be achieved chemically or biologically; in both processes the filter removal efficiency depends on the interaction between the packed filter media and the influent. In this paper a residence time distribution (RTD) study was carried out to understand and compare the transfer behaviour of contaminants through selected filter media packed in a laboratory-scale continuous upflow filter; the selected filter media are limestone and white dolomite. The experimental work was conducted by injecting a tracer (red drain dye, RDD) into the filtration system and then measuring the tracer concentration at the outflow as a function of time; the tracer injection was applied at hydraulic loading rates (HLRs) of 3.8 to 15.2 m h⁻¹. The results were analysed using the cumulative distribution function F(t) to estimate the residence time of the tracer molecules inside the filter media. The mean residence time (MRT) and variance σ² are the two moments of the RTD that were calculated to compare the RTD characteristics of limestone with those of white dolomite. The results showed that the exit-age distribution of the tracer was more favourable at HLRs of 3.8 to 7.6 m h⁻¹ for limestone and at 3.8 m h⁻¹ for white dolomite. At these HLRs the cumulative distribution function F(t) revealed that the residence time of the tracer inside the limestone was longer than in the white dolomite: all the tracer took 8 minutes to leave the white dolomite at 3.8 m h⁻¹, whereas the same amount of tracer took 10 minutes to leave the limestone at the same HLR. In conclusion, determining the optimal hydraulic loading rate, which achieves the better influent distribution over the filtration system, helps to identify the applicability of the material as filter media. Further work will examine the efficiency of the limestone and white dolomite for phosphate removal by pumping a phosphate solution into the filter at HLRs of 3.8 to 7.6 m h⁻¹.
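
A minimal sketch of the RTD moment calculations mentioned above (exit-age distribution, cumulative F(t), mean residence time and variance), using a hypothetical tracer response curve rather than the study's measurements:

```python
import numpy as np

# Hypothetical tracer response at the filter outflow: concentration vs. time.
t = np.arange(0.0, 11.0)                                        # time [min]
c = np.array([0, 2, 9, 15, 12, 8, 5, 3, 2, 1, 0], dtype=float)  # tracer concentration

dt = t[1] - t[0]
e = c / (c.sum() * dt)                 # exit-age distribution E(t), normalized to unit area
F = np.cumsum(e) * dt                  # cumulative distribution function F(t)

mrt = np.sum(t * e) * dt               # mean residence time (first moment of the RTD)
var = np.sum((t - mrt) ** 2 * e) * dt  # variance (second central moment)
print(f"MRT = {mrt:.2f} min, variance = {var:.2f} min^2")
```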

Keywords: filter media, hydraulic loading rate, residence time distribution, tracer

Procedia PDF Downloads 267
29760 Using Geospatial Analysis to Reconstruct the Thunderstorm Climatology for the Washington DC Metropolitan Region

Authors: Mace Bentley, Zhuojun Duan, Tobias Gerken, Dudley Bonsal, Henry Way, Endre Szakal, Mia Pham, Hunter Donaldson, Chelsea Lang, Hayden Abbott, Leah Wilcynzski

Abstract:

Air pollution has the potential to modify the lifespan and intensity of thunderstorms and the properties of lightning. Using data mining and geovisualization, we investigate how background climate and weather conditions shape variability in urban air pollution and how this, in turn, shapes thunderstorms as measured by the intensity, distribution, and frequency of cloud-to-ground lightning. A spatiotemporal analysis was conducted in order to identify thunderstorms using high-resolution lightning detection network data. Over seven million lightning flashes were used to identify more than 196,000 thunderstorms that occurred between 2006 and 2020 in the Washington, DC Metropolitan Region. Each lightning flash in the dataset was grouped into thunderstorm events by means of a temporal and spatial clustering algorithm. Once the thunderstorm event database was constructed, hourly wind direction, wind speed, and atmospheric thermodynamic data were added to the initiation and dissipation times and locations for the 196,000 identified thunderstorms. Hourly aerosol and air quality data for the thunderstorm initiation times and locations were also incorporated into the dataset. Developing thunderstorm climatologies using a lightning tracking algorithm and lightning detection network data was found to be useful for visualizing the spatial and temporal distribution of urban-augmented thunderstorms in the region.
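
As an illustration only (the abstract does not specify the authors' clustering algorithm), the sketch below groups hypothetical lightning flashes into storm events with DBSCAN after scaling the time axis; all thresholds and data are assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
x_km = rng.uniform(0, 200, 2000)         # hypothetical flash locations [km]
y_km = rng.uniform(0, 200, 2000)
t_min = rng.uniform(0, 24 * 60, 2000)    # flash times [minutes]

# Assumed thresholds: flashes within ~15 km and ~30 min belong to the same storm.
space_eps_km, time_eps_min = 15.0, 30.0
features = np.column_stack([x_km / space_eps_km,
                            y_km / space_eps_km,
                            t_min / time_eps_min])

labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(features)   # -1 marks noise flashes
n_storms = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_storms} thunderstorm events identified")
```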

Keywords: lightning, urbanization, thunderstorms, climatology

Procedia PDF Downloads 63
29759 A Benchtop Experiment to Study Changes in Tracer Distribution in the Subarachnoid Space

Authors: Smruti Mahapatra, Dipankar Biswas, Richard Um, Michael Meggyesy, Riccardo Serra, Noah Gorelick, Steven Marra, Amir Manbachi, Mark G. Luciano

Abstract:

Intracranial pressure (ICP) is profoundly regulated by the effects of cardiac pulsation and the volume of the incoming blood. Furthermore, these effects on ICP are amplified by the presence of a rigid skull that does not allow for changes in total volume during the cardiac cycle. These factors play a pivotal role in cerebrospinal fluid (CSF) dynamics and distribution, with consequences that are not well understood to this date and that may have a deep effect on the functioning of the Central Nervous System (CNS). We designed this study with two specific aims: (a) to study how pulsatility influences local CSF flow, and (b) to study how modulating intracranial pressure affects drug distribution throughout the subarachnoid space (SAS) globally. In order to achieve these aims, we built an elaborate in-vitro model of the SAS closely mimicking the dimensions and flow rates of physiological systems. To modulate intracranial pressure, we used an intracranially implanted, cardiac-gated, volume-oscillating balloon (CADENCE device). A commercially available dye was used to visualize changes in CSF flow. We first implemented two control cases, observing how the tracer behaves in the presence of pulsations from the brain phantom and the balloon individually. After establishing the controls, we tested two cases, having the brain and the balloon pulsate together in sync and out of sync. We then analyzed the distribution area using image processing software. The in-sync case produced a significant, roughly five-fold increase in the tracer distribution area relative to the out-of-sync case. Assuming that the tracer fluid mimics blood flow movement, a drug introduced into the SAS with such a system in place would show enhanced distribution, increasing the bioavailability of therapeutic drugs to a wider spectrum of brain tissue.

Keywords: blood-brain barrier, cardiac-gated, cerebrospinal fluid, drug delivery, neurosurgery

Procedia PDF Downloads 175
29758 Towards Integrating Statistical Color Features for Human Skin Detection

Authors: Mohd Zamri Osman, Mohd Aizaini Maarof, Mohd Foad Rohani

Abstract:

Human skin detection is recognized as the primary step in many applications, such as face detection, illicit image filtering, hand recognition, and video surveillance. The performance of any skin detection application relies greatly on two components: feature extraction and the classification method. Skin color is the most vital information used for skin detection. However, color features alone sometimes cannot handle images whose color distribution is the same as that of skin. Pixel-based color features do not eliminate skin-like colors, because the intensities of skin and skin-like colors fall under the same distribution. Hence, statistical color analysis, such as the mean and standard deviation, is exploited as an additional feature to increase the reliability of the skin detector. In this paper, we studied the effectiveness of statistical color features for human skin detection. Furthermore, the paper analyzed the integrated color and texture features using eight classifiers with three color spaces: RGB, YCbCr, and HSV. The experimental results show that integrating the statistical features with a Random Forest classifier achieved significant performance, with an F1-score of 0.969.
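
A minimal sketch of the idea of augmenting per-pixel color with local statistical features (mean and standard deviation) before classifying with a Random Forest; the 3x3 window, the synthetic image, and the labels are assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def statistical_features(img):
    """img: HxWx3 float array (e.g. RGB). Returns (H*W)x9 features:
    raw colour plus local mean and local std per channel (3x3 neighbourhood)."""
    h, w, _ = img.shape
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    windows = np.stack([padded[i:i + h, j:j + w]
                        for i in range(3) for j in range(3)], axis=-1)
    mean = windows.mean(axis=-1)
    std = windows.std(axis=-1)
    return np.concatenate([img, mean, std], axis=-1).reshape(-1, 9)

# Hypothetical training data: a synthetic image with binary skin/non-skin labels.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
labels = rng.integers(0, 2, 64 * 64)

X = statistical_features(img)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
pred = clf.predict(X).reshape(64, 64)   # per-pixel skin mask
```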

Keywords: color space, neural network, random forest, skin detection, statistical feature

Procedia PDF Downloads 453
29757 Atomistic Insight into the Trapped Oil Droplet/Nanofluid System in Nanochannels

Authors: Yuanhao Chang, Senbo Xiao, Zhiliang Zhang, Jianying He

Abstract:

The role of nanoparticles (NPs) in enhanced oil recovery (EOR) is being increasingly emphasized. In this study, the motion of NPs and the local stress distribution of a trapped oil droplet/nanofluid system in nanochannels are studied with coarse-grained modeling and molecular dynamics simulations. The results illustrate three motion patterns for NPs: hydrophilic NPs are more likely to adsorb on the channel and stay near the three-phase contact areas, hydrophobic NPs move inside the oil droplet as clusters, and mixed NPs are mostly trapped at the oil-water interface. NPs in each pattern affect the flow of fluid and the interfacial thickness to various degrees. The calculation of atomistic stress shows that higher stress occurs where NPs aggregate. Different occurrence patterns correspond to specific local stress distributions. Significantly, in the three-phase contact area for hydrophilic NPs, a local stress distribution close to the pattern of structural disjoining pressure is observed, which demonstrates the existence of structural disjoining pressure in a molecular dynamics simulation for the first time. Our results guide the design and screening of NPs for EOR and provide a basic understanding of nanofluid applications.

Keywords: local stress distribution, nanoparticles, enhanced oil recovery, molecular dynamics simulation, trapped oil droplet, structural disjoining pressure

Procedia PDF Downloads 123
29756 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The use of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with this technique allows the shift and the probability of that shift (i.e., portfolio risks) to be checked simultaneously. Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters that should be compared (whether they may be considered identical at a given significance level). Then the absolute difference in probabilities at each 'point' of the domain of these distributions is calculated. This measure is transformed into a function of cumulative distribution functions and compared to critical values. The table of critical values was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous works. At present, the extension to the two-dimensional case has been completed, allowing up to five parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but offers more efficient alternatives in nonstandard problems and on large amounts of data.
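
A minimal sketch of the core quantity described above, under the assumption that the measure is the integrated absolute difference between two normal densities; the transformation to cumulative distribution functions and the simulated critical-value table are not reproduced here:

```python
import numpy as np
from scipy.stats import norm

def density_difference_measure(mu1, s1, mu2, s2, n=10_000):
    """Numeric integral of |f1(x) - f2(x)| for two normal densities."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    x = np.linspace(lo, hi, n)
    diff = np.abs(norm.pdf(x, mu1, s1) - norm.pdf(x, mu2, s2))
    return np.sum(diff) * (x[1] - x[0])

# The statistic would be compared against simulated critical values; here we
# only illustrate the calculation itself for two hypothetical parameter sets.
print(density_difference_measure(0.0, 1.0, 0.3, 1.2))
```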

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 169
29755 Distribution System Planning with Distributed Generation and Capacitor Placements

Authors: Nattachote Rugthaicharoencheep

Abstract:

This paper presents a feeder reconfiguration problem in distribution systems. The objective is to minimize the system power loss and to improve the bus voltage profile. The optimization problem is subject to system constraints consisting of load-point voltage limits, a radial configuration format, no load-point interruption, and feeder capability limits. A method based on a genetic algorithm, a search technique based on the mechanics of natural selection and natural genetics, is proposed to determine the optimal configuration pattern. The developed methodology is demonstrated on a 33-bus radial distribution system with distributed generations and feeder capacitors. The study results show that the optimal on/off patterns of the switches can be identified to give the minimum power loss while respecting all the constraints.
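
A minimal genetic-algorithm skeleton over binary switch on/off patterns, for illustration only: the fitness function here is a placeholder, whereas the paper's method would evaluate power loss with a radial load flow and penalize violated constraints (voltage limits, radiality, feeder capacity); all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SWITCHES, POP, GENERATIONS = 37, 40, 100   # hypothetical problem size

def fitness(pattern):
    # Placeholder: a real implementation would run a load flow on the
    # reconfigured network and add penalties for violated constraints.
    return float(np.sum(pattern * np.arange(1, N_SWITCHES + 1)))

pop = rng.integers(0, 2, size=(POP, N_SWITCHES))          # random switch patterns
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:POP // 2]]           # truncation selection
    cut = rng.integers(1, N_SWITCHES, size=POP // 2)
    children = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cut)])       # one-point crossover
    mutate = rng.random(children.shape) < 0.02              # bit-flip mutation
    children[mutate] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(ind) for ind in pop])]         # best switch pattern found
```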

Keywords: network reconfiguration, distributed generation, capacitor placement, loss reduction, genetic algorithm

Procedia PDF Downloads 169
29754 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning

Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park

Abstract:

This study proposes a method for estimating the stress distribution of beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices of raw TLS data that satisfy a suitable condition and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimating the stress distribution of a structural member or of the whole structure is one of the important factors in the safety evaluation of a structure. Existing sensors, which include the ESG (electric strain gauge) and LVDT (Linear Variable Differential Transformer), can be categorized as contact-type sensors that must be installed on the structural members; they also have various limitations, such as the need for separate space where network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, a TLS system based on LiDAR (light detection and ranging), which can measure the displacement of a target over a long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds containing many points with local coordinates. Point clouds are not linearly distributed but dispersed; thus, interpolation is vital for their analysis. Through the formation of averaged lattices and CSSI applied to the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and is finally applicable to estimating the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
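
A minimal sketch of the interpolation-and-differentiation step described above, assuming a cubic smoothing spline fitted to noisy deflection samples and the elementary bending relation sigma = E * c * w''(x); the beam, material constants, and noise level are all hypothetical:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

E, c = 200e9, 0.15                     # Young's modulus [Pa], distance to outer fibre [m]
x = np.linspace(0.0, 6.0, 60)          # positions along the beam [m]
w = 1e-3 * np.sin(np.pi * x / 6.0)     # "measured" deflection [m]
w += np.random.default_rng(1).normal(0, 2e-5, x.size)   # measurement noise

# Cubic smoothing spline through the (lattice-averaged) deflection samples.
spline = UnivariateSpline(x, w, k=3, s=len(x) * (2e-5) ** 2)
curvature = spline.derivative(2)(x)    # w''(x) from the smoothed deflection
stress = E * c * curvature             # bending-stress estimate along the beam [Pa]
```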

Keywords: structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation

Procedia PDF Downloads 429
29753 A Two-Pronged Truncated Deferred Sampling Plan for Log-Logistic Distribution

Authors: Braimah Joseph Odunayo, Jiju Gillariose

Abstract:

This paper aims at developing a sampling plan that uses information from preceding and succeeding lots for lot disposition, under the assumption that the lifetime of the product follows a log-logistic distribution. A Two-Pronged Truncated Deferred Sampling Plan (TTDSP) for the log-logistic distribution is proposed when testing is truncated at a precise time. The best possible sample sizes are obtained under a given Maximum Allowable Percent Defective (MAPD), Test Suspension Ratio (TSR), and acceptance number (c). A formula for calculating the operating characteristics of the proposed plan is also developed. The operating characteristics and mean-ratio values were used to measure the performance of the plan. The findings of the study show that the log-logistic distribution has a decreasing failure rate; furthermore, as the mean-life ratio increases, the failure rate reduces, and the sample size increases as the acceptance number, test suspension ratio, and maximum allowable percent defective increase. The study concludes that the minimum sample sizes were smaller, which makes the plan more economical to adopt when production cost and time are high and the testing is destructive.
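
A minimal sketch of an operating-characteristic calculation for a time-truncated plan, under the assumption that lot acceptance requires at most c failures among n items tested to the truncation time, with item lifetimes following a log-logistic (Fisk) distribution; the plan parameters shown are hypothetical:

```python
from scipy.stats import binom, fisk

def acceptance_probability(n, c, t, shape, scale):
    """L(p) = sum_{d=0}^{c} C(n,d) p^d (1-p)^(n-d), with p the probability an
    item fails before the truncation time t under a log-logistic lifetime."""
    p = fisk.cdf(t, c=shape, scale=scale)   # P(failure before truncation time)
    return binom.cdf(c, n, p)

# Hypothetical plan: n = 20 items, acceptance number c = 2, truncation at t = 500 h.
print(acceptance_probability(n=20, c=2, t=500, shape=2.0, scale=1000.0))
```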

Keywords: consumer's risk, mean life, minimum sample size, operating characteristics, producer's risk

Procedia PDF Downloads 128
29752 Structure of the Working Time of Nurses in Emergency Departments in Polish Hospitals

Authors: Jadwiga Klukow, Anna Ksykiewicz-Dorota

Abstract:

An analysis of the distribution of nurses' working time constitutes vital information for management in planning employment. The objective of the study was to analyze the distribution of nurses' working time in an emergency department. The study was conducted in an emergency department of a teaching hospital in Lublin, in southeast Poland. The catalogue of activities performed by nurses was compiled by means of continuous observation. Identified activities were classified into four groups: direct care, indirect care, coordination of work in the department, and personal activities. The distribution of nurses' working time was determined by work sampling observation (Tippett) at random intervals. The research project was approved by the Research Ethics Committee of the Medical University of Lublin (Protocol 0254/113/2010). On average, nurses spent 31% of their working time on direct care, 47% on indirect care, 12% on coordinating work in the department, and 10% on personal activities. The most frequently performed direct care tasks were diagnostic activities (29.23%) and treatment-related activities (27.69%). The study has provided information on the complexity of the performed activities and the utilization of nurses' working time. Enhancing the effectiveness of nursing actions requires working out a strategy for improved management of the time nurses spend at work. Increasing the involvement of auxiliary staff and optimizing communication processes within the team may lead to a reduction of the time devoted to indirect care for the benefit of direct care.

Keywords: emergency nurses, nursing care, workload, work sampling

Procedia PDF Downloads 323
29751 Atlantic Sailfish (Istiophorus albicans) Distribution off the East Coast of Florida from 2003 to 2018 in Response to Sea Surface Temperature

Authors: Meredith M. Pratt

Abstract:

The Atlantic sailfish (Istiophorus albicans) ranges from 40°N to 40°S in the Western Atlantic Ocean and has great economic and recreational value for sport fishers. Off the eastern coast of Florida, charter boats often target this species, and Stuart, Florida, bills itself as the sailfish capital of the world. Sailfish tag data from The Billfish Foundation and NOAA were used to determine the relationship between sea surface temperature (SST) and the distribution of Atlantic sailfish caught and released over a fifteen-year period (2003 to 2018). Tagging information was collected from local sport fishermen in Florida. Using the time and location of each landed sailfish, a satellite-derived SST value was obtained for each point. The purpose of this study was to determine whether sea surface warming was associated with changes in sailfish distribution. On average, sailfish were caught at 26.16 ± 1.70°C (x̄ ± s.d.) over the fifteen-year period. The most sailfish catches occurred at temperatures ranging from 25.2°C to 25.5°C. Over the fifteen-year period, sailfish catches decreased at lower temperatures (~23°C and ~24°C) and at 31°C. At ~25°C and ~30°C there was no change in catch numbers of sailfish. From 26°C to 29°C, there was an increase in the number of sailfish. Based on these results, increasing ocean temperatures will have an impact on the distribution and habitat utilization of sailfish. Warming sea surface temperatures create a need for more policy and regulation to protect the Atlantic sailfish and related highly migratory billfish species.

Keywords: atlantic sailfish, Billfish, istiophorus albicans, sea surface temperature

Procedia PDF Downloads 129
29750 Petrology and Hydrothermal Alteration Mineral Distribution of Wells LA-9D and LA-10D in Aluto Geothermal Field, Ethiopia

Authors: Dereje Moges Azbite

Abstract:

Laboratory analysis of igneous rocks was performed with the help of major-oxide plots. The lithology of the two wells was identified using the major oxides obtained by the XRF method. Twenty-four (24) cutting samples with different degrees of alteration were analyzed to determine and identify the rock types by plotting the well samples on discrimination diagrams and correlating them with the regional rocks. The results of the analysis of the major oxides and trace elements of the 24 samples are presented. Alteration analysis was conducted on 21 samples from the two wells to identify clay minerals. Bulk sample analysis indicated that quartz, illite and micas, calcite, cristobalite, smectite, pyrite, epidote, alunite, chlorite, wairakite, diaspore, and kaolin minerals are present in both wells. Hydrothermal clay minerals such as illite, chlorite, smectite, and kaolin were identified in both wells by X-ray diffraction.

Keywords: Aluto geothermal field, igneous rocks, major oxides, trace elements, XRF, XRD, alteration minerals

Procedia PDF Downloads 127
29749 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting

Authors: Kahlil Fredrick Cui, Marissa Pastor

Abstract:

Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred therein. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that would project the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future.
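
A minimal sketch of the network construction described above, with one simplifying assumption: consecutive events are linked directly rather than through the paper's correlation metric. Epicentres are binned into fixed cells that become nodes, and directed weighted edges accumulate between the cells of successive earthquakes; the catalogue here is synthetic.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
lon = rng.uniform(120.0, 127.0, 500)    # hypothetical epicentre longitudes
lat = rng.uniform(5.0, 20.0, 500)       # hypothetical epicentre latitudes

cell = 0.5                              # assumed cell size in degrees
nodes = list(zip((lon // cell).astype(int), (lat // cell).astype(int)))

G = nx.DiGraph()
for a, b in zip(nodes[:-1], nodes[1:]):  # link the cells of consecutive events
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1
    else:
        G.add_edge(a, b, weight=1)

# Weighted in-/out-strength, the kind of centrality used to rank cells.
in_strength = dict(G.in_degree(weight="weight"))
out_strength = dict(G.out_degree(weight="weight"))
```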

Keywords: complex networks, correlations, earthquake, hazard assessment

Procedia PDF Downloads 200
29748 Evaluation of Reliability Flood Control System Based on Uncertainty of Flood Discharge, Case Study Wulan River, Central Java, Indonesia

Authors: Anik Sarminingsih, Krishna V. Pradana

Abstract:

The failure of a flood control system can be caused by various factors, such as not accounting for the uncertainty of the design flood, which causes the capacity of the flood control system to be exceeded. The presence of the uncertainty factor is recognized as a serious issue in hydrological studies. Uncertainty in hydrological analysis is influenced by many factors, from the reading of water elevation data and rainfall data to the selection of the method of analysis. In hydrological modeling, the selection of models and parameters corresponding to the watershed conditions should be evaluated with a hydraulic model of the river as a drainage channel. River cross-section capacity is the first line of defense in knowing the reliability of the flood control system; the reliability of the river capacity describes the potential magnitude of flood risk. The case study in this research is the Wulan River in Central Java. Flooding occurs on this river almost every year despite flood-control efforts such as levees, a floodway, and diversions. The flood-affected areas include several sub-districts, mainly in Kabupaten Kudus and Kabupaten Demak. The first step is a frequency analysis of discharge observations from the Klambu weir, which has time-series data from 1951 to 2013. The frequency analysis is performed using several frequency distribution models, such as the Gumbel, Normal, Log-Normal, Pearson Type III, and Log-Pearson distributions. The results of the models overlap within one standard deviation, so the maximum flood discharge for lower return periods may exceed the average discharge for larger return periods. The next step is to perform a hydraulic analysis to evaluate the reliability of the river capacity based on the flood discharges resulting from the several methods. The design flood discharge of the flood control system is selected as the result of the method closest to the bankfull capacity of the river.
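
A minimal sketch of the frequency-analysis step, fitting two of the candidate distributions named above to a hypothetical annual-maximum discharge series and comparing the resulting design discharge for one return period; the data and the 50-year return period are assumptions:

```python
import numpy as np
from scipy.stats import gumbel_r, pearson3

# Hypothetical annual-maximum discharge series [m^3/s], for illustration only.
q_annual_max = np.array([420, 515, 610, 380, 700, 455, 530, 820, 390, 640,
                         570, 495, 760, 610, 445], dtype=float)

T = 50                                   # return period [years]
p_non_exceed = 1.0 - 1.0 / T

gumbel_fit = gumbel_r.fit(q_annual_max)
q50_gumbel = gumbel_r.ppf(p_non_exceed, *gumbel_fit)

logq = np.log10(q_annual_max)            # Log-Pearson III: Pearson III on log10(Q)
lp3_fit = pearson3.fit(logq)
q50_lp3 = 10 ** pearson3.ppf(p_non_exceed, *lp3_fit)

print(f"50-year discharge: Gumbel {q50_gumbel:.0f} m^3/s, Log-Pearson III {q50_lp3:.0f} m^3/s")
```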

Keywords: design flood, hydrological model, reliability, uncertainty, Wulan river

Procedia PDF Downloads 289
29747 A CFD Analysis of Hydraulic Characteristics of the Rod Bundles in the BREST-OD-300 Wire-Spaced Fuel Assemblies

Authors: Dmitry V. Fomichev, Vladimir V. Solonin

Abstract:

This paper presents the findings from a numerical simulation of the flow in 37-rod fuel assembly models spaced by a double-wire trapezoidal wrapping, as applied to the BREST-OD-300 experimental nuclear reactor. Data on the static pressure distribution within the models and equations for determining the fuel bundle flow friction factors have been obtained. Recommendations are provided on using the closure turbulence models available in ANSYS Fluent. A comparative analysis has been performed against the existing empirical equations for determining the flow friction factors, and good agreement between the calculated and experimental data is shown. An analysis of the experimental data and the results of the numerical simulation of the BREST-OD-300 fuel rod assembly hydrodynamic performance is presented.

Keywords: BREST-OD-300, wire-spaced, fuel assembly, computational fluid dynamics

Procedia PDF Downloads 371
29746 Model of Transhipment and Routing Applied to the Cargo Sector in Small and Medium Enterprises of Bogotá, Colombia

Authors: Oscar Javier Herrera Ochoa, Ivan Dario Romero Fonseca

Abstract:

This paper presents the design of a model for planning the distribution logistics operation. The significance of this work lies in its applicability to the analysis of small and medium enterprises (SMEs) of dry freight in Bogotá. The implementation consists of two stages: in the first, optimal planning is achieved through a hybrid model developed with mixed integer programming, which considers the transshipment operation, based on a combined load allocation model, as a classic transshipment model; in the second, the specific routing of that operation is obtained through the Clarke and Wright savings heuristic. As a result, an integral model is obtained to carry out the step-by-step planning of the distribution of dry freight for SMEs in Bogotá. In this manner, optimal assignments are established by utilizing transshipment centers, with the purpose of determining the specific routing based on the shortest distance traveled.
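
A minimal sketch of the Clarke and Wright savings step used in the routing stage, under simplifying assumptions: no vehicle capacities, a hypothetical symmetric distance matrix, and route merges only at route ends.

```python
import numpy as np

def clarke_wright(dist, depot=0):
    """Basic savings heuristic: start with one route per customer, then merge
    routes in decreasing order of savings s(i,j) = d(0,i) + d(0,j) - d(i,j)."""
    n = len(dist)
    customers = [i for i in range(n) if i != depot]
    routes = {i: [i] for i in customers}
    savings = sorted(((dist[depot][i] + dist[depot][j] - dist[i][j], i, j)
                      for i in customers for j in customers if i < j), reverse=True)
    for s, i, j in savings:
        ri, rj = routes[i], routes[j]
        # merge only when i ends one route and j starts a different route
        if ri is not rj and ri[-1] == i and rj[0] == j:
            merged = ri + rj
            for k in merged:
                routes[k] = merged
    seen, result = set(), []
    for r in routes.values():
        if id(r) not in seen:
            seen.add(id(r))
            result.append([depot] + r + [depot])
    return result

# Hypothetical symmetric distance matrix (node 0 is the transshipment center).
dist = np.array([[0, 4, 5, 7],
                 [4, 0, 3, 6],
                 [5, 3, 0, 4],
                 [7, 6, 4, 0]])
print(clarke_wright(dist))
```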

Keywords: transshipment model, mixed integer programming, saving algorithm, dry freight transportation

Procedia PDF Downloads 214
29745 Functioning of Public Distribution System and Calories Intake in the State of Maharashtra

Authors: Balasaheb Bansode, L. Ladusingh

Abstract:

The public distribution system (PDS) is an important component of food security. It is a massive welfare program undertaken by the Government of India and, since India is a federal state, implemented by the state governments; it pursues multiple objectives such as eliminating hunger, reducing malnutrition, and making food consumption affordable. The program reaches the community level through various agencies of the government. The paper focuses on the accessibility of the PDS at the household level and on how the present policy framework results in exclusion and inclusion errors. It explores the sanctioned food-grain quantity received at the household level by ration cards differentiated according to an income criterion, and it highlights the types of corruption in food distribution generated by the PDS. The data used are secondary, from the 68th round of the NSSO conducted in 2012. Bivariate and multivariate techniques have been used to understand the working of the PDS and food consumption for this paper.

Keywords: calorie intake, entitled food quantity, poverty alleviation through PDS, target error

Procedia PDF Downloads 323
29744 Spectroscopic Autoradiography of Alpha Particles on Geologic Samples at the Thin Section Scale Using a Parallel Ionization Multiplier Gaseous Detector

Authors: Hugo Lefeuvre, Jerôme Donnard, Michael Descostes, Sophie Billon, Samuel Duval, Tugdual Oger, Herve Toubon, Paul Sardini

Abstract:

Spectroscopic autoradiography is a method of interest for geological sample analysis. Indeed, researchers may face different issues, such as radioelement identification and quantification, in the field of environmental studies. Imaging gaseous ionization detectors find their place in geosciences for conducting specific measurements of radioactivity to improve the monitoring of natural processes using naturally-occurring radioactive tracers, but also for the nuclear industry linked to the mining sector. In geological samples, the location and identification of the radioactive-bearing minerals at the thin-section scale remain a major challenge, as the detection limit of the usual elementary microprobe techniques is far higher than the concentration of most of the natural radioactive decay products. The spatial distribution of each decay product, in the case of uranium in a geomaterial, is interesting for relating radionuclide concentrations to the mineralogy. The present study aims to provide a spectroscopic autoradiography analysis method for measuring the initial energy of alpha particles with a parallel ionization multiplier gaseous detector. The analysis method has been developed thanks to Geant4 modelling of the detector. The tracks of alpha particles recorded in the gas detector allow the simultaneous measurement of the initial point of emission and the reconstruction of the initial particle energy by a selection based on the linear energy distribution. This spectroscopic autoradiography method was successfully used to reproduce the alpha spectra from the 238U decay chain on a geological sample at the thin-section scale. The characteristics of this measurement are an energy spectrum resolution of 17.2% (FWHM) at 4647 keV and a spatial resolution of at least 50 µm. Even if the efficiency of energy spectrum reconstruction is low (4.4%) compared to the efficiency of a simple autoradiograph (50%), this novel measurement approach offers the opportunity to select areas on an autoradiograph and perform an energy spectrum analysis within them. This opens up possibilities for the detailed analysis of heterogeneous geological samples containing natural alpha emitters such as uranium-238 and radium-226. This measurement will allow the study of the spatial distribution of uranium and its decay products in geo-materials by coupling with scanning electron microscope characterizations. The direct application of this dual modality (energy-position) of analysis will be the subject of future developments. The measurement of the radioactive equilibrium state of heterogeneous geological structures and the quantitative mapping of 226Ra radioactivity are now being actively studied.

Keywords: alpha spectroscopy, digital autoradiography, mining activities, natural decay products

Procedia PDF Downloads 141
29743 Nonparametric Copula Approximations

Authors: Serge Provost, Yishan Zang

Abstract:

Copulas are currently utilized in finance, reliability theory, machine learning, signal processing, geodesy, hydrology and biostatistics, among several other fields of scientific investigation. It follows from Sklar's theorem that the joint distribution function of a multidimensional random vector can be expressed in terms of its associated copula and marginals. Since marginal distributions can easily be determined by making use of a variety of techniques, we address the problem of securing the distribution of the copula. This will be done by using several approaches. For example, we will obtain bivariate least-squares approximations of the empirical copulas, modify the kernel density estimation technique and propose a criterion for selecting appropriate bandwidths, differentiate linearized empirical copulas, secure Bernstein polynomial approximations of suitable degrees, and apply a corollary to Sklar's result. Illustrative examples involving actual observations will be presented. The proposed methodologies will as well be applied to a sample generated from a known copula distribution in order to validate their effectiveness.
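
A minimal sketch of one of the approaches listed above, a Bernstein polynomial approximation of the bivariate empirical copula; the sample, the polynomial degree, and the rank-based pseudo-observations are assumptions used only for illustration:

```python
import numpy as np
from scipy.stats import rankdata
from scipy.special import comb

def empirical_copula(u, v, pu, pv):
    """Empirical copula evaluated at (u, v) from pseudo-observations (pu, pv)."""
    return np.mean((pu <= u) & (pv <= v))

def bernstein_copula(u, v, pu, pv, m=10):
    """Degree-m Bernstein smoothing of the empirical copula at (u, v)."""
    k = np.arange(m + 1)
    bu = comb(m, k) * u ** k * (1 - u) ** (m - k)   # Bernstein basis in u
    bv = comb(m, k) * v ** k * (1 - v) ** (m - k)   # Bernstein basis in v
    C = np.array([[empirical_copula(i / m, j / m, pu, pv) for j in k] for i in k])
    return bu @ C @ bv

# Hypothetical sample with positive dependence; pseudo-observations via ranks.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(size=500)
pu = rankdata(x) / (len(x) + 1)
pv = rankdata(y) / (len(y) + 1)

print(bernstein_copula(0.5, 0.5, pu, pv))
```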

Keywords: copulas, Bernstein polynomial approximation, least-squares polynomial approximation, kernel density estimation, density approximation

Procedia PDF Downloads 61
29742 A Framework for Supply Chain Efficiency Evaluation of Mass Customized Automobiles

Authors: Arshia Khan, Hans-Dietrich Haasis

Abstract:

Different tools of the supply chain should be managed very efficiently in mass customization. In the automobile industry, there are different strategies for managing these tools. We need to investigate which strategies among them are successful and which are not; there is a lack of literature regarding such analysis. Keeping this in view, the purpose of this paper is to construct a framework and model that can help to analyze the supply chain of mass-customized automobiles quantitatively for future studies. Furthermore, we also consider which types of data can be used for the suggested model and where they can be obtained. Such a framework can help to bring insight for future analysis.

Keywords: mass customization, supply chain, inventory, distribution, automobile industry

Procedia PDF Downloads 360
29741 Numerical Investigation of Fluid Flow and Temperature Distribution on Power Transformer Windings Using Open Foam

Authors: Saeed Khandan Siar, Stefan Tenbohlen, Christian Breuer, Raphael Lebreton

Abstract:

The goal of this article is to investigate the detailed temperature distribution and the fluid flow of an oil-cooled winding of a power transformer by means of computational fluid dynamics (CFD). The experimental setup consists of three passes of a zig-zag cooled disc-type winding, in which losses are modeled by heating cartridges in each winding segment. A precise temperature sensor measures the temperature of each turn. The laboratory setup allows exact control of the boundary conditions, e.g., the oil flow rate and the inlet temperature. Furthermore, a simulation model is solved using the open-source computational fluid dynamics solver OpenFOAM and validated with the experimental results. The model uses laminar and turbulent flow for the different mass flow rates of the oil. The good agreement of the simulation results with the experimental measurements validates the model.

Keywords: CFD, conjugated heat transfer, power transformers, temperature distribution

Procedia PDF Downloads 412
29740 Intermediate-Term Impact of Taiwan High-Speed Rail (HSR) and Land Use on Spatial Patterns of HSR Travel

Authors: Tsai Yu-hsin, Chung Yi-Hsin

Abstract:

The employment of an HSR system, which elevates inter-city and inter-region accessibility, is likely to promote spatial interaction between places in the HSR territory and beyond. Inter-city and inter-region travel via HSR could be affected by, among other factors, the land use, transportation, and location of the HSR station at both the trip origin and destination ends. However, relatively little insight has been shed on these impacts and on the spatial patterns of HSR travel. The research purposes of this study, as phase one of a series of HSR-related research, are threefold: to analyze the general spatial patterns of HSR trips, such as the spatial distribution of trip origins and destinations; to analyze whether specific land use, transportation characteristics, and trip characteristics affect HSR trips in terms of the use of HSR and the distribution of trip origins and destinations; and to analyze the socio-economic characteristics of HSR travelers. With the Taiwan HSR starting operation in 2007, this study emphasizes the intermediate-term impact of HSR, which is made possible with the population and housing census, the industry and commercial census data, and a station-area intercept survey conducted in the summer of 2014. The analysis will be conducted at the city, inter-city, and inter-region spatial levels, as necessary and required. The analysis tools include descriptive statistics and multivariate analysis with the assistance of SPSS, HLM, and ArcGIS. The findings, on the one hand, can provide policy implications for associated land use and transportation plans and for the site selection of HSR stations. On the other hand, the findings on travel are expected to provide insights that can help explain how land use and real estate values could be affected by HSR in following phases of this series of research.

Keywords: high speed rail, land use, travel, spatial pattern

Procedia PDF Downloads 453
29739 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than the normal one. Examples can be quoted from, but not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study a skew t distribution that can be used to model a data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and it also exhibits skewness. Although maximum likelihood estimates can be obtained by solving iteratively the likelihood equations that are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact the modified maximum likelihood estimates are equivalent to maximum likelihood estimates, asymptotically. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as maximum likelihood estimates that are obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical researches have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random error having non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and are explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 393
29738 Loss Allocation in Radial Distribution Networks for Loads of Composite Types

Authors: Sumit Banerjee, Chandan Kumar Chanda

Abstract:

The paper presents the allocation of active power losses and energy losses to consumers connected to radial distribution networks in a deregulated environment for loads of composite types. A detailed comparison among four algorithms, namely the quadratic loss allocation, proportional loss allocation, pro rata loss allocation, and exact loss allocation methods, is presented. Quadratic and proportional loss allocation are based on identifying the active and reactive components of current in each branch, with the losses allocated to each consumer accordingly; the pro rata loss allocation method is based on the load demand of each consumer; and the exact loss allocation method is based on the actual contribution of each consumer to the active power loss. The effectiveness of the proposed comparison among the four algorithms for composite loads is demonstrated through an example.
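
For illustration, a minimal sketch of the pro rata scheme only, the simplest of the four methods compared above: total losses are shared in proportion to each consumer's demand. All numerical values are hypothetical.

```python
import numpy as np

demand = np.array([60.0, 45.0, 90.0, 25.0])    # consumer active-power demand [kW]
total_loss = 12.4                              # total feeder active-power loss [kW]

# Pro rata allocation: each consumer's share is proportional to its demand.
pro_rata_share = total_loss * demand / demand.sum()
for i, share in enumerate(pro_rata_share, start=1):
    print(f"Consumer {i}: {share:.2f} kW of losses allocated")
```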

Keywords: composite type, deregulation, loss allocation, radial distribution networks

Procedia PDF Downloads 278
29737 Comparative Analysis of Hybrid Dynamic Stabilization and Fusion for Degenerative Disease of the Lumbosacral Spine: Finite Element Analysis

Authors: Mohamed Bendoukha, Mustapha Mosbah

Abstract:

Radiographic evidence suggests that asymptomatic adjacent segment disease (ASD) is common after lumbar fusion, but this does not correlate with functional outcomes, whereas compensatory increased motion and stresses at the level adjacent to the fusion are well known to be associated with ASD. Newly developed hybrid stabilization is mostly substituted for the superior level of the fusion in an attempt to reduce the number of fused levels and the likelihood of degeneration at the adjacent levels during fusion with pedicle screws. Nevertheless, its biomechanical efficiency remains unknown, and complications associated with failure of the constructs, such as screw loosening and toggling, should be elucidated. In the current study, a finite element (FE) study was performed using a validated L2/S1 model subjected to a moment of 7.5 Nm and a follower load of 400 N to assess the biomechanical behavior of hybrid constructs based on dynamic topping-off and semi-rigid fusion. The residual range of motion (ROM), the stress distribution at the fused and adjacent levels, and the stress distribution at the disc and the cage-endplate interface with respect to changes in bone quality were investigated. The hybrid instrumentation was associated with a reduction in compressive stresses compared to the fusion construct in the adjacent-level disc and showed a substantially higher axial force in the implant, while the fusion instrumentation increased the motion for both flexion and extension.

Keywords: intervertebral disc, lumbar spine, degenerative nuclesion, L4-L5, range of motion, finite element model, hyperelasticity

Procedia PDF Downloads 175
29736 Numerical Simulation of Flow and Heat Transfer Characteristics with Various Working Conditions inside a Reactor of Wet Scrubber

Authors: Jonghyuk Yoon, Hyoungwoon Song, Youngbae Kim, Eunju Kim

Abstract:

Recently, with the rapid growth of the semiconductor industry, much interest has been focused on after-treatment systems that remove the polluted gas produced by the semiconductor manufacturing process, and the wet scrubber is one of the most widely used systems. In terms of the gas-removal mechanism, the polluted gas is removed first by chemical reaction in a reactor part. After that, the polluted gas stream is brought into contact with the scrubbing liquid by spraying it with the liquid. Effective design of the reactor part inside the wet scrubber is highly important, since the removal performance of the polluted gas in the reactor plays an important role in the overall performance and stability. In the present study, a CFD (Computational Fluid Dynamics) analysis was performed to determine the thermal and flow characteristics inside a unit reactor of the wet scrubber. In order to verify the numerical results, the temperature distribution of the numerical results at various monitoring points was compared to the experimental results. The average error rate between them was 12-15%, and the numerical temperature distribution was in good agreement with the experimental data. Using the validated numerical method, the effect of the reactor geometry on the heat transfer rate was also taken into consideration; the uniformity of the temperature distribution was improved by about 15%. Overall, the results of the present study provide useful information for identifying the fluid behavior and thermal performance of various scrubber systems. This project is supported by the 'R&D Center for the reduction of Non-CO₂ Greenhouse gases (RE201706054)' funded by the Korea Ministry of Environment (MOE) as the Global Top Environment R&D Program.

Keywords: semiconductor, polluted gas, CFD (Computational Fluid Dynamics), wet scrubber, reactor

Procedia PDF Downloads 131
29735 Multi-Subpopulation Genetic Algorithm with Estimation of Distribution Algorithm for Textile Batch Dyeing Scheduling Problem

Authors: Nhat-To Huynh, Chen-Fu Chien

Abstract:

The textile batch dyeing scheduling problem is complicated: it includes batch formation, batch assignment to machines, and batch sequencing with sequence-dependent setup times. Most manufacturers schedule their orders manually, which is time consuming and inefficient, so more powerful methods are needed to improve the solution. Motivated by these real needs, this study proposes approaches in which a genetic algorithm is developed with multiple subpopulations and hybridized with an estimation of distribution algorithm to solve the constructed problem and minimize the makespan. A heuristic algorithm is designed and embedded into the proposed algorithms to improve their ability to escape local optima. In addition, an empirical study was conducted in a textile company in Taiwan to validate the proposed approaches. The results show that the proposed approaches are more efficient than a simulated annealing algorithm.

Keywords: estimation of distribution algorithm, genetic algorithm, multi-subpopulation, scheduling, textile dyeing

Procedia PDF Downloads 292
29734 Comparison of Heuristic Methods for Solving Traveling Salesman Problem

Authors: Regita P. Permata, Ulfa S. Nuraini

Abstract:

The Traveling Salesman Problem (TSP) is among the most studied problems in combinatorial optimization. In simple terms, the TSP is the problem of finding a minimum-distance tour that starts and ends in the same city and visits every other city exactly once. In product distribution, companies often face the problem of determining the minimum distance, which affects time allocation. In this research, we apply TSP heuristic methods to simulated nodes representing city coordinates in product distribution. The heuristics used are sub-tour reversal, nearest neighbor, farthest insertion, cheapest insertion, nearest insertion, and arbitrary insertion. We simulated nodes using Euclidean distances to compare the number of cities and processing time, and thus identify the best heuristic method. The results show that the best heuristic methods are farthest insertion and nearest insertion. These two methods can be recommended to solve product distribution problems in certain companies.
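
A minimal sketch of one of the heuristics compared above, the nearest-neighbor construction, on randomly simulated city coordinates with Euclidean distances; the number of cities and the starting node are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
cities = rng.uniform(0, 100, size=(20, 2))     # hypothetical node coordinates
dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)

def nearest_neighbour_tour(dist, start=0):
    """Repeatedly move to the closest unvisited city, then return to the start."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour + [start]

tour = nearest_neighbour_tour(dist)
length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
print(f"tour length: {length:.1f}")
```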

Keywords: Euclidean, heuristics, simulation, TSP

Procedia PDF Downloads 121
29733 The Analysis of Spatial Development: Malekan City

Authors: Rahim Sarvar, Bahram Azadbakht, Samira Safaee

Abstract:

The leading goal of all planning is to attain sustainable development, regional balance, a suitable distribution of activities, and maximum use of environmental capabilities in the process of regional development. The intensive concentration of population and activities in one or a few limited geographical localities is a main characteristic of most developing countries, especially Iran. Disregarding long-term programs and relying on temporary and superficial plans by decision-makers pursuing their own objectives creates obstacles and results in unbalanced development. The basic reason for these problems is development planning that considers only economic aspects while paying no attention to social and regional feedback, which has led to social and economic inequality and an unbalanced distribution of development among regions. In addition to studying the spatial planning and structure of the county of Malekan, this research pursues several other aims: to recognize and introduce approaches for utilizing resources optimally, to distribute the population, activities, and facilities in an optimal fashion, and to investigate and identify the spatial development potential of the county. Based on documentary, descriptive, analytical, and field studies, this research employs maps to analyze the data, investigates the variables, and applies SPSS, AutoCAD, and ArcView software. The results show that natural factors have a significant influence on the spatial layout of settlements, that the distribution of facilities and functions is not equal among the rural districts of the county, and that there is a spatial equivalence in the region between population and the number of settlements.

Keywords: development, entropy index, Malekan City, planning, regional equilibrium

Procedia PDF Downloads 431