Search results for: Maximum Likelihood estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2648

1388 The Competitive Newsvendor Game with Overestimated Demand

Authors: Chengli Liu, C. K. M. Lee

Abstract:

The traditional competitive newsvendor game assumes that decision makers are rational. However, people exhibit behavioral biases when making decisions, such as loss aversion, mental accounting and overconfidence. Overestimation of a subject's own performance is one type of overconfidence. The objective of this research is to analyze the impact of overestimated demand in the competitive newsvendor game with two players. This study builds a competitive newsvendor game model in which newsvendors have private information about their demands, which is overestimated. At the same time, demand forecasts for each newsvendor produced by a third-party institution are available. This research shows that the overestimation leads to a demand steal effect, which reduces the competitor's order quantity. However, the overall supply of the product increases due to overestimation. This study establishes the boundary condition under which the overestimated newsvendor's equilibrium order drops due to the demand steal effect from the other newsvendor. A newsvendor with a higher critical fractile will see its equilibrium order decrease as the other newsvendor's estimation level drops.
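
A minimal numerical sketch of the critical-fractile logic the competitive model builds on is given below: it computes the single-newsvendor order quantity under a true and an overestimated normal demand. The prices, costs and demand parameters are hypothetical and are not taken from the paper.

```python
# Single-newsvendor critical-fractile rule (illustrative parameters only).
from scipy.stats import norm

price, cost, salvage = 10.0, 6.0, 2.0        # hypothetical unit economics
cu, co = price - cost, cost - salvage        # underage / overage costs
fractile = cu / (cu + co)                    # critical fractile (0.5 here)

mu_true, sigma = 100.0, 20.0                 # assumed normal demand
mu_over = 1.2 * mu_true                      # 20% overestimated mean

q_true = norm.ppf(fractile, loc=mu_true, scale=sigma)
q_over = norm.ppf(fractile, loc=mu_over, scale=sigma)
print(f"critical fractile = {fractile:.2f}")
print(f"order with true demand: {q_true:.1f}; with overestimated demand: {q_over:.1f}")
```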

Keywords: Bias, competitive newsvendor, Nash equilibrium, overestimation.

1387 Generation of Highly Ordered Porous Antimony-Doped Tin Oxide Film by A Simple Coating Method with Colloidal Template

Authors: Asep Bayu Dani Nandiyanto, Asep Suhendi, Yutaka Kisakibaru, Takashi Ogi, Kikuo Okuyama

Abstract:

An ordered porous antimony-doped tin oxide (ATO) film was successfully prepared using a simple coating process with colloidal templates. The facile production was effective when a combination of 16-nm ATO (as a model inorganic nanoparticle) and polystyrene (PS) spheres (as a model template) was simply coated to produce a composite ATO/PS film. Heat treatment was then used to remove the PS and produce the porous film. A porous film with a spherical pore shape and a highly ordered porous structure could be obtained. The pore size could also be controlled by changing the initial template size. A theoretical explanation and the mechanism of pore formation are also provided, which is important for scale-up prediction and estimation.

Keywords: Porous structure film, ATO particle, ultra-low refractive index, vertical drop method, low-density material.

1386 Determining the Best Fitting Distributions for Minimum Flows of Streams in Gediz Basin

Authors: Naci Büyükkaracığan

Abstract:

Today, the need for water sources is swiftly increasing due to population growth. At the same time, it is known that some regions will face water shortage and drought because of global warming and climate change. In this context, the evaluation and analysis of hydrological data, such as observed trends, drought and flood prediction, and short-term flows, is of great importance. Selecting the most accurate probability distribution is important for describing low-flow statistics in studies related to drought analysis. As in many basins in Turkey, the Gediz River basin will be considerably affected by drought, which will decrease the amount of available water. The aim of this study is to derive appropriate probability distributions for frequency analysis of annual minimum flows at six gauging stations of the Gediz Basin. After applying 10 different probability distributions, six different parameter estimation methods and three goodness-of-fit tests, the Pearson type 3 and generalized extreme value distributions were found to give the best results.
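
For readers unfamiliar with the workflow, the sketch below fits a few candidate distributions to a synthetic minimum-flow series by maximum likelihood and ranks them with a Kolmogorov-Smirnov test; the station records and the full set of ten distributions, six estimation methods and three tests used in the paper are not reproduced.

```python
# Fit candidate low-flow distributions and compare goodness of fit (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
min_flows = rng.gamma(shape=3.0, scale=2.0, size=40)   # hypothetical annual minima (m^3/s)

candidates = {
    "Pearson type 3": stats.pearson3,
    "GEV": stats.genextreme,
    "Lognormal": stats.lognorm,
}
for name, dist in candidates.items():
    params = dist.fit(min_flows)                       # maximum likelihood estimates
    ks_stat, p_value = stats.kstest(min_flows, dist.cdf, args=params)
    print(f"{name:15s} KS = {ks_stat:.3f}, p = {p_value:.3f}")
```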

Keywords: Gediz Basin, goodness-of-fit tests, minimum flows, probability distribution.

1385 Effect of Gas-Diffusion Oxynitriding on Microstructure and Hardness of Ti-6Al-4V Alloys

Authors: Dong Bok Lee, Min Jung Kim

Abstract:

The commercially available titanium alloy Ti-6Al-4V was oxynitrided in deoxygenated nitrogen gas at high temperatures, followed by cooling in oxygen-containing nitrogen, in order to analyze the influence of the oxynitriding parameters on the phase modification, hardness, and microstructural evolution of the oxynitrided coating. The surface microhardness of the oxynitrided alloy increased due to the strengthening effect of the formed titanium oxynitrides, TiNxOy. The maximum microhardness was obtained when TiNxOy had a near-equiatomic composition of nitrogen and oxygen, which could be attained under the optimum oxygen partial pressure and temperature-time conditions.

Keywords: Oxynitriding, surface microhardness, titanium alloys, Ti-6Al-4V.

1384 Analysis of a TBM Tunneling Effect on Surface Subsidence: A Case Study from Tehran, Iran

Authors: A. R. Salimi, M. Esmaeili, B. Salehi

Abstract:

The development and extension of large cities has induced a need for shallow tunnels in the soft ground of built-up areas. Estimation of the ground settlement caused by tunnel excavation is an important engineering issue. In this paper, the prediction of surface subsidence caused by tunneling in one section of line seven of the Tehran subway is considered. On the basis of the geotechnical conditions of the region, a tunnel 26.9 km in length has been excavated by a mechanized method using an EPB-TBM with a diameter of 9.14 m. The settlement is estimated using both analytical methods and the numerical finite element method. The numerical method gives a settlement of 5 cm for this section, while the analytical results (Bobet and Loganathan-Poulos) are 5.29 cm and 12.36 cm, respectively. According to the results of this study, due to the saturation of this section, there is good agreement between the Bobet and numerical methods. Therefore, the tunneling process in this section needs special consolidation measures and a support system before the passage of the tunnel boring machine.

Keywords: TBM, Subsidence, Numerical Method, Analytical Method.

1383 An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia

Authors: Carol Anne Hargreaves

Abstract:

A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system using machine learning techniques can identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks that were likely to go up in price (portfolio 1). Next, the principal component analysis technique was used to select stocks that were rated high on component one and component two (portfolio 2). Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy. All accuracy measures were above 70%, and all portfolios outperformed the market by more than eight times. The top three stocks were selected for each of the three stock portfolios and traded in the market for one month. After one month, the return of each stock portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio and 8.88% for the K-means cluster portfolio, while the stock market return was 0.38%. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
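
The sketch below illustrates the three portfolio-construction routes named above (K-means clustering, PCA ranking, logistic regression) on synthetic features; the actual feature set, data and selection thresholds of the study are not reproduced.

```python
# Three ways to shortlist stocks from a feature matrix (synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))                              # 200 stocks, 6 technical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # price went up

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
scores = PCA(n_components=2).fit_transform(X)
prob_up = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

portfolio1 = np.where(clusters == 0)[0][:3]                # cluster portfolio (cluster chosen arbitrarily here)
portfolio2 = np.argsort(scores[:, 0] + scores[:, 1])[-3:]  # highest combined PC1 + PC2 scores
portfolio3 = np.argsort(prob_up)[-3:]                      # highest predicted probability of going up
print(portfolio1, portfolio2, portfolio3)
```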

Keywords: Machine learning, stock market trading, logistic regression, principal component analysis, automated stock investment system.

1382 Knowledge Management Model for Managing Knowledge among Related Organizations

Authors: Mahboubeh Molaei

Abstract:

Transferring information developed by other people is an ordinary event that happens during daily conversations; for example, when employees see each other in the organization, have lunch together, or attend a meeting, they talk about their experiences, discuss their current projects, and talk about their successes with specific problems. Despite the potential value of leveraging organizational memory and expertise by using OMS and ER, small organizations still have not been able to capitalize on their promised value. Each organization has its own internal knowledge management system; in some organizations the system suffers from a lack of expert people to store their experience in the repository, while in other organizations there are many expert people but the organization does not make maximum use of their knowledge.

Keywords: Knowledge, knowledge management.

1381 Multiple Moving Talker Tracking by Integration of Two Successive Algorithms

Authors: Kenji Suyama, Masahiro Oshida, Noboru Owada

Abstract:

In this paper, the estimation accuracy of multiple moving talker tracking using a microphone array is improved. The tracking is achieved by an adaptive method in which two algorithms are integrated, namely the PAST (Projection Approximation Subspace Tracking) algorithm and the IPLS (Interior Point Least Square) algorithm. When a talker begins to speak again after a silent period, an appropriate feasible region for the evaluation function of the IPLS algorithm might not be set, and the tracking then fails due to incorrect updating. Therefore, if an increase in the number of active talkers is detected, the feasible region must be reset. A low-cost realization is required for high-speed tracking, and a high-accuracy realization is desired for precise tracking. In this paper, the directions roughly estimated using the delay-and-sum array method are used for the resetting. Several results of experiments performed in an actual room environment show the effectiveness of the proposed method.

Keywords: Moving talker tracking, microphone array, signal subspace.

1380 Applications of Entropy Measures in Field of Queuing Theory

Authors: R. K. Tuli

Abstract:

In the present communication, we have studied the variations of entropy measures in different states of queueing processes. In the case of a steady-state queueing process, it is shown that as the arrival rate increases, the uncertainty increases, whereas in the case of a non-steady birth-death process, the uncertainty varies differently: it first increases and attains its maximum value, and then, with the passage of time, it decreases and attains its minimum value.
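
As a concrete steady-state illustration, the sketch below uses the M/M/1 queue as a simple stand-in for the systems in the keywords and computes the Shannon entropy of the stationary queue-length distribution, which grows as the arrival rate (and hence the utilization rho) increases.

```python
# Entropy of the steady-state M/M/1 queue-length distribution p_n = (1 - rho) * rho**n.
import numpy as np

def mm1_entropy(rho, n_max=5000):
    n = np.arange(n_max)
    p = (1.0 - rho) * rho**n
    p = p[p > 0]                          # drop underflowed tail terms
    return -np.sum(p * np.log(p))

for rho in (0.1, 0.3, 0.5, 0.7, 0.9):     # rho = arrival rate / service rate
    print(f"rho = {rho:.1f}  entropy = {mm1_entropy(rho):.3f} nats")
```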

Keywords: Entropy, birth-death process, M/G/1 system, G/M/1 system, steady state, non-steady state.

1379 Transformer Top-Oil Temperature Modeling and Simulation

Authors: T. C. B. N. Assunção, J. L. Silvino, P. Resende

Abstract:

The winding hot-spot temperature is one of the most critical parameters affecting the useful life of power transformers. The winding hot-spot temperature can be calculated as a function of the top-oil temperature, which can be estimated from measured ambient temperature and transformer loading data. This paper proposes estimating the top-oil temperature using a method based on the Least Squares Support Vector Machines approach. The estimated top-oil temperature is compared with measured data from a power transformer in operation. The results are also compared with methods based on the IEEE Standard C57.91-1995/2000 and artificial neural networks. It is shown that the Least Squares Support Vector Machines approach performs better than the methods based on the IEEE Standard C57.91-1995/2000 and artificial neural networks.
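
A minimal sketch of the regression task is given below; it uses synthetic data and kernel ridge regression, a close relative of the LS-SVM regressor, since LS-SVM itself is not available in scikit-learn.

```python
# Regress top-oil temperature on ambient temperature and per-unit load (synthetic data).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
ambient = rng.uniform(15, 35, size=300)                 # deg C
load = rng.uniform(0.3, 1.0, size=300)                  # per-unit loading
top_oil = ambient + 40 * load**1.6 + rng.normal(scale=1.0, size=300)  # assumed behaviour

X = np.column_stack([ambient, load])
model = make_pipeline(StandardScaler(), KernelRidge(kernel="rbf", alpha=0.1, gamma=0.5))
model.fit(X[:200], top_oil[:200])
pred = model.predict(X[200:])
print("test RMSE:", round(mean_squared_error(top_oil[200:], pred) ** 0.5, 2), "deg C")
```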

Keywords: Artificial Neural Networks, Hot-spot Temperature, Least Squares Support Vector Machines, Top-oil Temperature.

1378 Simulation Tools for Fixed Point DSP Algorithms and Architectures

Authors: K. B. Cullen, G. C. M. Silvestre, N. J. Hurley

Abstract:

This paper presents software tools that convert the C/C++ floating-point source code for a DSP algorithm into a fixed-point simulation model that can be used to evaluate the numerical performance of the algorithm on several different fixed-point platforms, including microprocessors, DSPs and FPGAs. The tools use a novel system for maintaining binary point information so that the conversion from floating point to fixed point is automated and the resulting fixed-point algorithm achieves the maximum possible precision. A configurable architecture is used during the simulation phase so that the algorithm can produce a bit-exact output for several different target devices.
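
The core quantization step that such tools automate can be sketched as follows; the word length and binary point below are illustrative choices, not the tools' defaults.

```python
# Quantize a floating-point signal to a signed fixed-point grid and measure the error.
import numpy as np

def to_fixed(x, word_bits=16, frac_bits=12):
    """Round x to a signed fixed-point word with 2**-frac_bits resolution, with saturation."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (word_bits - 1)), (1 << (word_bits - 1)) - 1
    q = np.clip(np.round(np.asarray(x) * scale), lo, hi)
    return q / scale

signal = np.sin(2 * np.pi * np.linspace(0, 1, 8, endpoint=False))
fixed = to_fixed(signal)
print("max quantization error:", np.max(np.abs(signal - fixed)))
```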

Keywords: DSP devices, DSP algorithm, simulation model, software

1377 Investigation on Mesh Sensitivity of a Transient Model for Nozzle Clogging

Authors: H. Barati, M. Wu, A. Kharicha, A. Ludwig

Abstract:

A transient model for nozzle clogging has been developed and successfully validated against a laboratory experiment. The key steps of clogging are considered: transport of particles by the turbulent flow towards the nozzle wall; interactions between the fluid flow and the nozzle wall, and the adhesion of particles on the wall; and the growth of the clog layer and its interaction with the flow. The current paper investigates the mesh (size and type) sensitivity of the model in both two and three dimensions. It is found that the algorithm for clog growth alone, excluding the flow effect, is insensitive to the mesh type and size, but the calculation including flow becomes sensitive to the mesh quality. The use of 2D meshes leads to overestimation of the clog growth because the 3D nature of the flow in the boundary layer cannot be properly resolved by a 2D calculation. 3D simulation with a tetrahedral mesh can also lead to an erroneous estimation of the clog growth. A mesh-independent result can be achieved with a hexahedral mesh, or at least with triangular prisms (inflation layers) for the near-wall regions.

Keywords: Clogging, nozzle, numerical model, simulation.

1376 Modified Hybrid Genetic Algorithm-Based Artificial Neural Network Application on Wall Shear Stress Prediction

Authors: Zohreh Sheikh Khozani, Wan Hanna Melini Wan Mohtar, Mojtaba Porhemmat

Abstract:

Prediction of wall shear stress in a rectangular channel with non-homogeneous roughness distribution was studied. Estimation of shear stress is an important subject in hydraulic engineering, since it directly affects the flow structure. In this study, the Genetic Algorithm Artificial (GAA) neural network is introduced as a hybrid methodology combining the Artificial Neural Network (ANN) and a modified Genetic Algorithm (GA). This GAA method was employed to predict the wall shear stress. Various input combinations and transfer functions were considered to find the most appropriate GAA model. The results show that the proposed GAA method can predict the wall shear stress of open channels with high accuracy, with a Root Mean Square Error (RMSE) of 0.064 on the test dataset. Thus, using GAA provides an accurate and practical simple-to-use equation.

Keywords: Artificial neural network, genetic algorithm, genetic programming, rectangular channel, shear stress.

1375 Removal of Methylene Blue from Aqueous Solution by Using Gypsum as a Low Cost Adsorbent

Authors: Muhammad A. Rauf, I. Shehadeh, Amal Ahmed, Ahmed Al-Zamly

Abstract:

The removal of Methylene Blue (MB) from aqueous solution by adsorption on gypsum was investigated by the batch method. The studies were conducted at 25°C and included the effects of pH and the initial concentration of Methylene Blue. The adsorption data were analyzed using the Langmuir, Freundlich and Temkin isotherm models. The maximum monolayer adsorption capacity was found to be 36 mg of dye per gram of gypsum. The data were also analyzed in terms of their kinetic behavior and were found to obey the pseudo-second-order equation.
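
A minimal sketch of fitting the Langmuir isotherm by non-linear least squares is shown below; the equilibrium data points are hypothetical, and only the reported capacity of about 36 mg/g comes from the abstract.

```python
# Fit the Langmuir isotherm q_e = q_max * K * C_e / (1 + K * C_e) to equilibrium data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_e, q_max, k):
    return q_max * k * c_e / (1.0 + k * c_e)

c_e = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])      # mg/L, hypothetical
q_e = np.array([8.0, 15.0, 22.0, 28.0, 32.0, 34.5])     # mg/g, hypothetical

(q_max, k), _ = curve_fit(langmuir, c_e, q_e, p0=(30.0, 0.1))
print(f"fitted q_max = {q_max:.1f} mg/g, K = {k:.3f} L/mg")
```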

Keywords: Adsorption, Dye, Gypsum, Kinetics, Methylene Blue.

1374 Long-Range Dependence of Financial Time Series Data

Authors: Chatchai Pesee

Abstract:

This paper examines the long-range dependence, or long memory, of financial time series, using exchange rate data and the fractional Brownian motion (fBm). The principle of the spectral density function in Section II is used to find the range of the Hurst parameter (H) of the fBm. If 0 < H < 1/2, the process has short-range dependence (SRD); it exhibits long memory, or long-range dependence (LRD), if 1/2 < H < 1. The curve of the exchange rate data is fBm because of the specific value of the Hurst parameter (H). Furthermore, some definitions of the fBm, long-range dependence and self-similarity are also reviewed in Section II. Our results in Section III indicate that there exists long memory, or long-range dependence (LRD), in the exchange rate data. Long-range dependence of the exchange rate data and estimation of the Hurst parameter (H) are discussed in Section IV, while conclusions are given in Section V.
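
A minimal sketch of one common way to estimate H is given below; it uses the scaling of the variance of increments, Var[X(t+lag) - X(t)] ~ lag^(2H), rather than the spectral density approach of the paper, and is applied to a synthetic random walk (expected H close to 0.5) instead of exchange-rate data.

```python
# Estimate the Hurst parameter from the scaling of increment variances.
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=10000))          # stand-in for a log exchange-rate path

lags = np.arange(2, 100)
var_inc = [np.var(x[lag:] - x[:-lag]) for lag in lags]
slope, _ = np.polyfit(np.log(lags), np.log(var_inc), 1)   # slope = 2H
print("estimated H =", round(slope / 2.0, 3))
```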

Keywords: Fractional Brownian motion, long-range dependence, memory, short-range dependence.

1373 Precombining Adaptive LMMSE Detection for DS-CDMA Systems in Time Varying Channels: Non Blind and Blind Approaches

Authors: M. D. Kokate, T. R. Sontakke, P. W. Wani

Abstract:

This paper deals with an adaptive multiuser detector for direct-sequence code division multiple-access (DS-CDMA) systems. A modified receiver, the precombining LMMSE detector, is considered under a time-varying channel environment. Detector updating is performed with two criteria: mean square error (MSE) estimation and the MOE optimization technique. The adaptive implementation issues of these two schemes are quite different. The MSE criterion updates the filter weights by minimizing the error between the data vector and the adaptive vector. The MOE criterion, together with a canonical representation of the detector, results in a constrained optimization problem. Even though the canonical representation is very complicated under time-varying channels, it is analyzed under the assumption of an average power profile of the multipath replicas of the user of interest. The performance of both schemes is studied for practical SNR conditions. Results show that for poor SNR, the MSE precombining LMMSE detector is better than the blind precombining LMMSE detector, but for higher SNR, the MOE scheme performs better.

Keywords: LMMSE, MOE, MUD.

1372 An Attempt to Predict the Performances of a Rocket Thrust Chamber

Authors: A. Benarous, D. Karmed, R. Haoui, A. Liazid

Abstract:

The process for predicting the ballistic properties of a liquid rocket engine is based on the quantitative estimation of deviations from idealized performance. To this end, an equilibrium chemistry procedure is first developed and implemented in a Fortran routine. The thermodynamic formulation allows for the calculation of the theoretical performances of a rocket thrust chamber. In a second step, a computational fluid dynamics analysis of the turbulent reactive flow within the chamber is performed using a finite volume approach. The obtained values for the "quasi-real" performances account for both turbulent mixing and chemistry-turbulence coupling. In the present work, emphasis is placed on the combustion efficiency, whose deviation is mainly due to radial gradients of static temperature and mixture ratio. Numerical values of the characteristic velocity are successfully compared with results from an industry-used code. The results are also confronted with the experimental data of a laboratory-scale rocket engine.
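
As a worked illustration of the characteristic-velocity comparison, the sketch below evaluates c* = p_c * A_t / m_dot and a c*-based efficiency for purely hypothetical chamber values; in the paper the ideal value would come from the equilibrium-chemistry routine.

```python
# Characteristic velocity and c*-efficiency from illustrative chamber data.
p_c = 6.0e6          # chamber pressure, Pa
A_t = 2.0e-3         # throat area, m^2
m_dot = 5.0          # propellant mass flow rate, kg/s

c_star_real = p_c * A_t / m_dot
c_star_ideal = 2550.0                        # hypothetical theoretical value, m/s
print(f"c* = {c_star_real:.0f} m/s, efficiency = {c_star_real / c_star_ideal:.3f}")
```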

Keywords: JANAF methodology, Liquid rocket engine, Mascotte test-rig, Theoretical performances.

1371 Contribution of On-Site and Off-Site Processes to Greenhouse Gas (GHG) Emissions by Wastewater Treatment Plants

Authors: Laleh Yerushalmi, Fariborz Haghighat, Maziar Bani Shahabadi

Abstract:

The estimation of the overall on-site and off-site greenhouse gas (GHG) emissions of wastewater treatment plants revealed that in anaerobic and hybrid treatment systems, greater emissions result from off-site processes than from on-site processes. However, in aerobic treatment systems, on-site processes make a higher contribution to the overall GHG emissions. The total GHG emissions were estimated to be 1.6, 3.3 and 3.8 kg CO2-e/kg BOD in the aerobic, anaerobic and hybrid treatment systems, respectively. In the aerobic treatment system without recovery and use of the generated biogas, the off-site GHG emissions were 0.65 kg CO2-e/kg BOD, accounting for 40.2% of the overall GHG emissions. This value changed to 2.3 and 2.6 kg CO2-e/kg BOD, accounting for 69.9% and 68.1% of the overall GHG emissions, in the anaerobic and hybrid treatment systems, respectively. The increased off-site GHG emissions in the anaerobic and hybrid treatment systems are mainly due to material usage and energy demand in these systems. The anaerobic digester can contribute up to 100%, 55% and 60% of the overall energy needs of plants in the aerobic, anaerobic and hybrid treatment systems, respectively.

Keywords: On-site and off-site greenhouse gas (GHG) emissions, wastewater treatment plants, biogas recovery.

1370 Optimal Convolutive Filters for Real-Time Detection and Arrival Time Estimation of Transient Signals

Authors: Michal Natora, Felix Franke, Klaus Obermayer

Abstract:

Linear convolutive filters are fast in calculation and in application, and thus are often used for real-time processing of continuous data streams. In the case of transient signals, a filter has not only to detect the presence of a specific waveform, but to estimate its arrival time as well. In this study, a measure is presented which indicates the performance of detectors in achieving both of these tasks simultaneously. Furthermore, a new sub-class of linear filters within the class of filters which minimize the quadratic response is proposed. The proposed filters are more flexible than the existing ones, like the adaptive matched filter or the minimum power distortionless response beamformer, and prove to be superior with respect to that measure in certain settings. Simulations of a real-time scenario confirm the advantage of these filters as well as the usefulness of the performance measure.
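
A minimal sketch of the task is given below: detecting a known transient with a plain matched filter (one of the existing detectors mentioned above) and taking the arrival time from the output peak, using a synthetic waveform and noise.

```python
# Matched-filter detection with arrival-time estimation by the output peak.
import numpy as np

rng = np.random.default_rng(4)
template = np.hanning(32) * np.sin(2 * np.pi * np.arange(32) / 8)   # known transient waveform
stream = rng.normal(scale=0.3, size=2000)                           # continuous noisy data stream
true_arrival = 700
stream[true_arrival:true_arrival + 32] += template

output = np.correlate(stream, template, mode="valid")               # linear convolutive filter output
estimate = int(np.argmax(output))                                   # detection + arrival-time estimate
print("true arrival:", true_arrival, "estimated arrival:", estimate)
```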

Keywords: Adaptive matched filter, minimum variance distortionless response, beamforming, Capon beamformer, linear filters, performance measure.

1369 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values

Authors: Burçin Saltık, Levent Genç

Abstract:

This study aimed to determine a procedure for identifying rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imagery acquired during the 2013 production season with Path/Row number 181/32 was used. Four different seasonal images were generated utilizing the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps with 8 classes were generated. The area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were developed, and the results of these maps were compared with the actual rice cultivation area records of the Turkish Statistical Institute (TurkSTAT; TSI). Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and coherence with the TSI results. Additionally, rice areas on slopes above 4° were considered mis-classified pixels and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain the maximum-minimum value ranges of the NDVI, LSWI, and LST images for each date (May, June, July, August and September separately), to test whether they may be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map; considering the TSI data and the accuracy assessment results, mis-classified pixels were eliminated from this map. According to the results, 83151.5 ha of rice areas exist within the study area. However, this result is higher than the TSI records by 12702.3 ha. The use of the maximum-minimum ranges of rice-area NDVI, LSWI, and LST was tested in Meric district. Using the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. This difference is normal due to the relatively low resolution of the images. Thus, employing images with higher spectral, spatial, temporal and radiometric resolutions may provide more reliable results.
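
For reference, the sketch below computes the two spectral indices named above from Landsat 8 OLI reflectance bands (band 4 = red, band 5 = NIR, band 6 = SWIR1) and applies an illustrative value-range mask; the thresholds are not those derived in the study.

```python
# NDVI and LSWI from tiny synthetic reflectance patches, plus a value-range mask.
import numpy as np

red = np.array([[0.08, 0.10], [0.12, 0.09]])      # band 4
nir = np.array([[0.45, 0.40], [0.30, 0.42]])      # band 5
swir1 = np.array([[0.20, 0.22], [0.25, 0.18]])    # band 6

ndvi = (nir - red) / (nir + red)
lswi = (nir - swir1) / (nir + swir1)

rice_mask = (ndvi > 0.5) & (lswi > 0.2)           # illustrative thresholds only
print("NDVI:\n", ndvi.round(2))
print("LSWI:\n", lswi.round(2))
print("rice mask:\n", rice_mask)
```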

Keywords: Landsat 8 (OLI-TIRS), LULC, spectral indices, rice.

1368 Mutually Independent Hamiltonian Cycles of Cn x Cn

Authors: Kai-Siou Wu, Justie Su-Tzu Juan

Abstract:

In a graph G, a cycle is a Hamiltonian cycle if it contains all vertices of G. Two Hamiltonian cycles C_1 = ⟨u_0, u_1, u_2, ..., u_{n−1}, u_0⟩ and C_2 = ⟨v_0, v_1, v_2, ..., v_{n−1}, v_0⟩ in G are independent if u_0 = v_0 and u_i ≠ v_i for all 1 ≤ i ≤ n−1. In G, a set of Hamiltonian cycles C = {C_1, C_2, ..., C_k} is mutually independent if any two Hamiltonian cycles of C are independent. The mutually independent Hamiltonicity IHC(G) = k means that k is the maximum integer such that there exist k mutually independent Hamiltonian cycles starting from any vertex of G. In this paper, we prove that IHC(C_n × C_n) = 4 for n ≥ 3.
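
A small sketch of the independence test defined above is given below; the example cycles are drawn from C_4 and K_5, not from C_n × C_n.

```python
# Check whether two Hamiltonian cycles with the same start vertex are independent.
def independent(c1, c2):
    """c1, c2: vertex sequences of equal length n, without repeating the start vertex."""
    if len(c1) != len(c2) or c1[0] != c2[0]:
        return False
    return all(u != v for u, v in zip(c1[1:], c2[1:]))

# The two Hamiltonian cycles of C_4 from vertex 0 coincide at position 2 -> not independent.
print(independent([0, 1, 2, 3], [0, 3, 2, 1]))        # False
# Two Hamiltonian cycles of the complete graph K_5 that disagree at every position after 0.
print(independent([0, 1, 2, 3, 4], [0, 2, 4, 1, 3]))  # True
```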

Keywords: Hamiltonian, independent, cycle, Cartesian product, mutually independent Hamiltonicity

1367 Semi-automatic Background Detection in Microscopic Images

Authors: Alessandro Bevilacqua, Alessandro Gherardi, Ludovico Carozza, Filippo Piccinini

Abstract:

Recent years have seen an increasing use of image analysis techniques in the field of biomedical imaging, in particular in microscopic imaging. The basic step of most image analysis techniques relies on a background image free of objects of interest, whether they are cells or histological samples, to perform further analysis such as segmentation or mosaicing. Commonly, this image consists of an empty field acquired in advance. However, acquiring an empty field is often not feasible. Moreover, it could differ from the background region of the sample actually being studied, because of the interaction with the organic matter. Finally, it could be expensive, for instance in the case of live cell analyses. We propose a non-parametric and general-purpose approach in which the background is built automatically from a sequence of images that may contain objects of interest. The amount of object-free area in each image only affects the overall speed of obtaining the background. Experiments with different kinds of microscopic images prove the effectiveness of our approach.
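
A minimal sketch of the underlying idea is shown below: it builds the background as a per-pixel temporal median of a synthetic sequence containing a moving object, which is a simpler estimator than the one proposed in the paper.

```python
# Background estimation by temporal median over a stack of frames with a moving object.
import numpy as np

rng = np.random.default_rng(5)
frames = np.full((20, 64, 64), 100.0) + rng.normal(scale=2.0, size=(20, 64, 64))
for k in range(20):
    r = c = 5 + 2 * k                               # a bright "cell" drifting across the field
    frames[k, r:r + 6, c:c + 6] += 80.0

background = np.median(frames, axis=0)              # the object moves, so the median rejects it
print("background mean:", round(float(background.mean()), 1))   # close to 100
```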

Keywords: Microscopy, flat field correction, background estimation, image segmentation.

1366 Combining Color and Layout Features for the Identification of Low-resolution Documents

Authors: Ardhendu Behera, Denis Lalanne, Rolf Ingold

Abstract:

This paper proposes a method combining color and layout features for identifying documents captured with low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented with an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and hierarchically represented. The combined color and layout features are arranged in a symbolic file, which is unique for each document and is called the document's visual signature. Our identification method first uses the color information in the signatures in order to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remaining search space. Our experiments consider slide documents, which are often captured using handheld devices.

Keywords: Document color modeling, document visual signature, kernel density estimation, document identification.

1365 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency

Authors: Fanqiang Kong, Chending Bian

Abstract:

In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint sparsity is the first property, which assumes that adjacent pixels can be expressed as different linear combinations of the same materials. The second property is rank deficiency, where the number of endmembers present in the hyperspectral data is very small compared with the dimensionality of the spectral library, which means that the abundance matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l_{2,p}-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms state-of-the-art algorithms with better spectral unmixing accuracy.

Keywords: Hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation.

1364 Kinetic Parameter Estimation from Thermogravimetry and Microscale Combustion Calorimetry

Authors: Rhoda Afriyie Mensah, Lin Jiang, Solomon Asante-Okyere, Xu Qiang, Cong Jin

Abstract:

Flammability analysis of extruded polystyrene (XPS) has become crucial due to its utilization as an insulation material for energy-efficient buildings. Using the Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods, the degradation kinetics of two pure XPS materials from the local market, a red and a grey one, were obtained from the results of thermogravimetric analysis (TG) and microscale combustion calorimetry (MCC) experiments performed at the same heating rates. From the experiments, it was found that the red XPS released more heat than the grey XPS, and both materials showed two mass-loss stages. Consequently, the kinetic parameters for the red XPS were higher than for the grey XPS. A comparative evaluation of the activation energies from MCC and TG showed an insignificant degree of deviation, signifying equivalent apparent activation energies from both methods. However, different activation energy profiles, resulting from the different chemical pathways, were observed when the dependencies of the activation energies on the extent of conversion for TG and MCC were compared.
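
A minimal sketch of the Kissinger-Akahira-Sunose regression at a single conversion level is given below; the heating-rate and temperature pairs are illustrative, not the measured XPS data.

```python
# KAS method: regress ln(beta / T**2) on 1/T across heating rates; the slope equals -Ea/R.
import numpy as np

R = 8.314                                        # J/(mol K)
beta = np.array([5.0, 10.0, 20.0, 40.0])         # heating rates, K/min
T = np.array([650.0, 662.0, 675.0, 689.0])       # temperature at a fixed conversion, K

slope, _ = np.polyfit(1.0 / T, np.log(beta / T**2), 1)
Ea = -slope * R
print(f"apparent activation energy ~ {Ea / 1000:.0f} kJ/mol")
```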

Keywords: Flammability, microscale combustion calorimetry, thermogravimetric analysis, thermal degradation, kinetic analysis.

1363 Spatio-Temporal Analysis and Mapping of Malaria in Thailand

Authors: Krisada Lekdee, Sunee Sammatat, Nittaya Boonsit

Abstract:

This paper proposes a GLMM with spatial and temporal effects for malaria data in Thailand. A Bayesian method is used for parameter estimation via Gibbs sampling MCMC. A conditional autoregressive (CAR) model is assumed to present the spatial effects. The temporal correlation is presented through the covariance matrix of the random effects. The malaria quarterly data have been extracted from the Bureau of Epidemiology, Ministry of Public Health of Thailand. The factors considered are rainfall and temperature. The result shows that rainfall and temperature are positively related to the malaria morbidity rate. The posterior means of the estimated morbidity rates are used to construct the malaria maps. The top 5 highest morbidity rates (per 100,000 population) are in Trat (Q3, 111.70), Chiang Mai (Q3, 104.70), Narathiwat (Q4, 97.69), Chiang Mai (Q2, 88.51), and Chanthaburi (Q3, 86.82). According to the DIC criterion, the proposed model has a better performance than the GLMM with spatial effects but without temporal terms.

Keywords: Bayesian method, generalized linear mixed model (GLMM), malaria, spatial effects, temporal correlation.

1362 A Numerical Study on the Influence of CO2 Dilution on Combustion Characteristics of a Turbulent Diffusion Flame

Authors: Yasaman Tohidi, Rouzbeh Riazi, Shidvash Vakilipour, Masoud Mohammadi

Abstract:

The objective of the present study is to numerically investigate the effect of replacing N2 with CO2 in the air stream on the flame characteristics of a CH4 turbulent diffusion flame. The Open source Field Operation and Manipulation (OpenFOAM) toolbox has been used as the computational tool. In this regard, the laminar flamelet and modified k-ε models have been utilized as the combustion and turbulence models, respectively. Results reveal that the presence of CO2 in the air stream changes the flame shape and the maximum flame temperature. Also, CO2 dilution causes an increase in the CO mass fraction.

Keywords: CH4 diffusion flame, CO2 dilution, OpenFOAM, turbulent flame.

1361 Deployment of a Biocompatible International Space Station into Geostationary Orbit

Authors: Tim Falk, Chris Chatwin

Abstract:

This study explores the possibility of a space station that will occupy a geostationary equatorial orbit (GEO) and create artificial gravity using centripetal acceleration. The concept of the station is to create a habitable, safe environment that can increase the possibility of space tourism by reducing the wide variation of hazards associated with space exploration. The ability to control the intensity of the artificial gravity through Hall-effect thrusters will allow experiments to be carried out at different levels of artificial gravity. A feasible prototype model was built to convey the concept and to enable cost estimation. The SpaceX Falcon Heavy rocket, with a 26,700 kg payload to GEO, was selected to take the 675 tonne spacecraft into orbit; space station construction will require up to 30 launches, which would be reduced to 5 launches when the SpaceX BFR becomes available. The estimated total cost of implementing the Sussex Biocompatible International Space Station (BISS) is approximately $47.039 billion, which is very attractive when compared to the cost of the International Space Station, which was $150 billion.
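
A short arithmetic sketch of two quantities mentioned above is given below: the spin rate needed for a chosen artificial-gravity level (a = omega^2 * r) and the launch count implied by the quoted masses. The rotation radius is an assumed value, not a figure from the study.

```python
# Spin rate for a target artificial-gravity level, and an idealised launch count.
import math

g = 9.81
radius = 60.0                                     # m, assumed rotation radius
for fraction in (0.5, 1.0):
    omega = math.sqrt(fraction * g / radius)      # rad/s, from a = omega**2 * r
    rpm = omega * 60 / (2 * math.pi)
    print(f"{fraction:.1f} g at r = {radius:.0f} m -> {rpm:.2f} rpm")

station_mass_t, payload_t = 675.0, 26.7           # tonnes; Falcon Heavy payload to GEO as quoted
launches = math.ceil(station_mass_t / payload_t)  # mass-only lower bound; the study quotes up to 30
print("minimum launches by mass alone:", launches)
```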

Keywords: Artificial gravity, biocompatible, geostationary orbit, space station.

1360 The Estimation of Human Vital Signs Complexity

Authors: L. Bikulciene, E. Venskaityte, G. Jarusevicius

Abstract:

Nonstationary and nonlinear signals generated by living complex systems defy traditional mechanistic approaches, which are based on homeostasis. Our previous studies have shown that evaluating the interactions of physiological signals using special analysis methods is suitable for the observation of physiological processes. We demonstrate the possibility of using a deep physiological model, based on the interpretation of the changes of the human body's functional states, combined with an analytical method based on matrix theory for physiological signal analysis, which was applied to high-risk cardiac patients. It is shown that the evaluation of cardiac signal interactions reveals functional changes peculiar to each individual at the onset of the hemodynamic restoration procedure. Therefore, we suggest that the assessment of the alterations of the functional state of the body after patients undergo surgery can be complemented by the data obtained from the suggested approach of evaluating the interactions of functional variables.

Keywords: Cardiac diseases, Complex systems theory, ECG analysis, matrix analysis.

1359 Nodal Load Profiles Estimation for Time Series Load Flow Using Independent Component Analysis

Authors: Mashitah Mohd Hussain, Salleh Serwan, Zuhaina Hj Zakaria

Abstract:

This paper presents a method to estimate nodal load profiles from multiple power flow solutions for every minute over 24 hours a day. A method to calculate multiple solutions of a non-linear profile is introduced. The Power System Simulation for Engineering (PSS®E) software and Python have been used to solve the load power flow. The results of these power flow solutions have been used to estimate the load profiles for each load bus using Independent Component Analysis (ICA), without any knowledge of the parameters and network topology of the system. The proposed algorithm is tested on the IEEE 69-bus test system, representing the distribution part, and the ICA method has been programmed in MATLAB R2012b. Simulation results and estimation errors are discussed in this paper.
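
A minimal sketch of the ICA step is shown below: it recovers independent load profiles from synthetic bus measurements with FastICA, without any network model, power-flow solution or PSS®E data.

```python
# Separate daily load profiles from mixed bus measurements using FastICA.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 24, 1440)                                   # one day at one-minute resolution
sources = np.column_stack([
    1.0 + 0.5 * np.sin(2 * np.pi * t / 24),                    # residential-like profile
    np.where((t > 8) & (t < 18), 1.5, 0.4),                    # commercial-like profile
])
mixing = np.array([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]])        # hypothetical mixing matrix
observed = sources @ mixing.T                                  # three "measured" bus loads

recovered = FastICA(n_components=2, random_state=0).fit_transform(observed)
print("recovered profiles shape:", recovered.shape)            # (1440, 2), up to scale and order
```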

Keywords: Electrical Distribution System, Power Flow Solution, Distribution Network, Independent Component Analysis, Newton Raphson, Power System Simulation for Engineering.
