Search results for: conjugate dirichlet kernel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 395

335 Focus-Latent Dirichlet Allocation for Aspect-Level Opinion Mining

Authors: Mohsen Farhadloo, Majid Farhadloo

Abstract:

Aspect-level opinion mining, which aims to discover aspects (aspect identification) and their corresponding ratings (sentiment identification) from customer reviews, has increasingly attracted the attention of researchers and practitioners, as it provides valuable insights about products and services from the customers' point of view. Instead of addressing aspect identification and sentiment identification in two separate steps, it is possible to identify both simultaneously. In recent years, many graphical models based on Latent Dirichlet Allocation (LDA) have been proposed to solve aspect and sentiment identification in a single step. Although LDA models have been effective tools for the statistical analysis of document collections, they have shortcomings in addressing some unique characteristics of opinion mining. Our goal in this paper is to address one of the limitations of topic models to date: they fail to directly model the associations among topics. Indeed, in many text corpora it is natural to expect that certain subsets of the latent topics have higher probabilities than others. We propose a probabilistic graphical model called focus-LDA to better capture the associations among topics when applied to aspect-level opinion mining. Our experiments on real-life data sets demonstrate the improved effectiveness of the focus-LDA model in terms of the accuracy of the predictive distributions over held-out documents. Furthermore, we demonstrate qualitatively that the focus-LDA topic model provides a natural way of visualizing and exploring unstructured collections of textual data.
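
As background for the LDA machinery the paper builds on, the following minimal Python sketch fits a plain (not focus-) LDA model to a handful of toy reviews with scikit-learn; the corpus, topic count and all parameter choices are illustrative assumptions, not the authors' setup.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = ["battery life is great but the screen is dim",
           "screen quality is excellent, battery drains fast",
           "delivery was slow and the packaging was damaged",
           "fast delivery, well packaged, battery works fine"]

counts = CountVectorizer(stop_words="english").fit(reviews)
X = counts.transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = counts.get_feature_names_out()
for t, comp in enumerate(lda.components_):       # top words per latent aspect
    top = comp.argsort()[-4:][::-1]
    print(f"topic {t}:", [terms[i] for i in top])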

Keywords: aspect-level opinion mining, document modeling, Latent Dirichlet Allocation, LDA, sentiment analysis

Procedia PDF Downloads 73
334 Evaluating Traffic Congestion Using the Bayesian Dirichlet Process Mixture of Generalized Linear Models

Authors: Ren Moses, Emmanuel Kidando, Eren Ozguven, Yassir Abdelrazig

Abstract:

This study applied traffic speed and occupancy to develop clustering models that identify different traffic conditions. These models are based on the Dirichlet Process Mixture of Generalized Linear regression (DML) and change-point regression (CR). The model frameworks were implemented using 2015 historical traffic data, aggregated at 15-minute intervals, from the Interstate 295 freeway in Jacksonville, Florida. Using the deviance information criterion (DIC) to identify the appropriate number of mixture components, three traffic states were identified: free-flow, transitional, and congested conditions. Results of the DML revealed that traffic occupancy is statistically significant in influencing the reduction of traffic speed in each of the identified states. The influence on the free-flow and congested states was estimated to be higher than on the transitional state in both the evening and morning peak periods. Estimation of the critical speed threshold using CR revealed that 47 mph and 48 mph are the speed thresholds for the congested and transitional traffic conditions during the morning and evening peak hours, respectively. Free-flow speed thresholds for the morning and evening peak hours were estimated at 64 mph and 66 mph, respectively. The proposed approaches will facilitate accurate detection and prediction of traffic congestion for developing effective countermeasures.
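
For readers who want a feel for Dirichlet-process mixture clustering of speed/occupancy data, here is a hedged Python sketch using scikit-learn's truncated DP approximation with Gaussian components, a stand-in for the paper's mixture of generalized linear models; the synthetic data and settings are assumptions.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# speed (mph) / occupancy (%) pairs; synthetic stand-in for loop-detector data
rng = np.random.default_rng(0)
free  = rng.normal([66, 8],  [3, 2], size=(500, 2))
trans = rng.normal([55, 15], [4, 3], size=(300, 2))
cong  = rng.normal([35, 30], [6, 5], size=(200, 2))
X = np.vstack([free, trans, cong])

# truncated Dirichlet-process mixture; surplus components get near-zero weight
dpm = BayesianGaussianMixture(n_components=10,
                              weight_concentration_prior_type="dirichlet_process",
                              random_state=0).fit(X)
print(np.round(dpm.weights_, 3))     # effective number of traffic states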

Keywords: traffic congestion, multistate speed distribution, traffic occupancy, Dirichlet process mixtures of generalized linear model, Bayesian change-point detection

Procedia PDF Downloads 264
333 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The Kernel Distribution Function Estimator (KDFE) is the most popular method for nonparametric estimation of the cumulative distribution function. The kernel and the bandwidth are the most important components of this estimator. In this investigation, we replace the single kernel in the KDFE with a linear combination of kernels to obtain a new estimator. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE) and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator, based on the plug-in technique from density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
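
A minimal Python sketch of a kernel distribution function estimator whose kernel is a linear combination of two integrated kernels (Gaussian and Epanechnikov here); the particular kernels, mixing weight and bandwidth are assumptions for illustration, not the paper's derived choices.

import numpy as np
from scipy.stats import norm

def W_gauss(u):                    # integrated Gaussian kernel
    return norm.cdf(u)

def W_epan(u):                     # integrated Epanechnikov kernel on [-1, 1]
    u = np.clip(u, -1.0, 1.0)
    return 0.25 * (2.0 + 3.0*u - u**3)

def kdfe(x, data, h, a=0.5):
    """CDF estimate with the combined kernel a*Gaussian + (1-a)*Epanechnikov."""
    u = (x[:, None] - data[None, :]) / h
    return (a*W_gauss(u) + (1.0 - a)*W_epan(u)).mean(axis=1)

rng = np.random.default_rng(1)
data = rng.normal(size=200)
grid = np.linspace(-3, 3, 7)
print(np.round(kdfe(grid, data, h=0.4), 3))
print(np.round(norm.cdf(grid), 3))   # true CDF for comparison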

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 544
332 A Boundary Backstepping Control Design for 2-D, 3-D and N-D Heat Equation

Authors: Aziz Sezgin

Abstract:

We consider the problem of stabilizing an unstable heat equation in a 2-D, 3-D and, in general, n-D domain by deriving a generalized backstepping boundary control design methodology. To stabilize the systems, we design boundary backstepping controllers inspired by the 1-D unstable heat equation stabilization procedure. We assume that one side of the boundary is hinged and the other side is controlled in each direction of the domain. Thus, controllers act on two boundaries for a 2-D domain, three boundaries for a 3-D domain and n boundaries for an n-D domain. The main idea of the design is to derive n controllers, one for each dimension, by using n kernel functions, so that we obtain n controllers for the n-dimensional case. We use a transformation to change the system into an exponentially stable n-dimensional heat equation. The transformation used in this paper is of generalized Volterra/Fredholm type, with n kernel functions for the n-D domain instead of the single kernel function of the 1-D design.
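
The 1-D building block of the design can be sketched numerically. Assuming the classical backstepping gain for the reaction-diffusion equation u_t = u_xx + λu with control at x = 1, namely k(1, y) = -λy I₁(z)/z with z = √(λ(1 - y²)) from the standard 1-D literature, the following Python finite-difference sketch applies the boundary feedback U(t) = ∫₀¹ k(1, y) u(y, t) dy; all discretization choices are illustrative.

import numpy as np
from scipy.special import i1

lam = 12.0                       # reaction coefficient; open loop unstable (lam > pi^2)
N = 100
x = np.linspace(0.0, 1.0, N + 1)
dx = 1.0 / N
dt = 0.4 * dx**2                 # explicit-Euler stable time step

z = np.sqrt(lam * (1.0 - x**2)) + 1e-12
k1 = -lam * x * i1(z) / z        # backstepping gain kernel k(1, y) on the grid

u = np.sin(np.pi * x)            # initial profile
for _ in range(20000):           # integrate to t = 0.8
    U = float(np.sum(k1 * u) * dx)            # boundary feedback U(t)
    u[0] = 0.0                                # hinged (Dirichlet) end
    u[-1] = U                                 # controlled end
    u[1:-1] += dt * ((u[2:] - 2.0*u[1:-1] + u[:-2]) / dx**2 + lam * u[1:-1])
print(abs(u).max())              # decays toward zero under the feedback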

Keywords: backstepping, boundary control, 2-D, 3-D, n-D heat equation, distributed parameter systems

Procedia PDF Downloads 375
331 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed. It may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction. The resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
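
For reference, a textbook serial Bi-CGSTAB iteration (after van der Vorst) in Python; this is a minimal sketch of the standard method discussed above, not the authors' stabilized or parallel variants, and the test matrix is an assumption.

import numpy as np

def bicgstab(A, b, tol=1e-10, maxit=1000):
    """Textbook Bi-CGSTAB: a minimal serial sketch."""
    n = b.size
    x = np.zeros(n)
    r = b - A @ x
    r0 = r.copy()                        # shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros(n)
    for k in range(maxit):
        rho_new = r0 @ r
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r0 @ v)       # Bi-CG coefficient
        s = r - alpha * v
        t = A @ s
        omega = (t @ s) / (t @ t)        # stabilizing step
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        rho = rho_new
    return x, maxit

# tridiagonal test system (stand-in for a sparse matrix)
n = 200
A = (np.diag(np.full(n, 4.0)) + np.diag(np.full(n-1, -1.0), 1)
     + np.diag(np.full(n-1, -1.0), -1))
b = np.ones(n)
x, its = bicgstab(A, b)
print(its, np.linalg.norm(A @ x - b))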

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 138
330 Nonlinear Equations with n-Dimensional Telegraph Operator Iterated K-Times

Authors: Jessada Tariboon

Abstract:

In this article, using a distribution kernel, we study nonlinear equations involving the n-dimensional telegraph operator iterated k times.

Keywords: telegraph operator, elementary solution, distribution kernel, nonlinear equations

Procedia PDF Downloads 461
329 Hardware Error Analysis and Severity Characterization in Linux-Based Server Systems

Authors: Nikolaos Georgoulopoulos, Alkis Hatzopoulos, Konstantinos Karamitsios, Konstantinos Kotrotsios, Alexandros I. Metsai

Abstract:

In modern server systems, business-critical applications run on different types of infrastructure, such as cloud systems, physical machines and virtualized environments. Often, due to high load and over time, various hardware faults occur in servers that translate into errors, resulting in malfunctions or even server breakdown. The CPU, RAM and hard drive (HDD) are the hardware parts that concern server administrators the most with regard to errors. In this work, selected RAM, HDD and CPU errors that have been observed, or can be simulated, in kernel ring buffer log files from two groups of Linux servers are investigated. Moreover, a severity characterization is given for each error type. A better understanding of such errors can lead to more efficient analysis of the kernel logs that are usually exploited for fault diagnosis and prediction. In addition, this work summarizes ways of simulating hardware errors in RAM and HDD in order to test the error detection and correction mechanisms of a Linux server.
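
A hedged sketch of the kind of kernel ring buffer scan the paper describes: the Python snippet below scans dmesg output for common RAM/CPU/disk error signatures. The patterns and severity labels are illustrative assumptions, not the paper's taxonomy; real log formats vary by kernel version and hardware, and reading dmesg may require elevated privileges on some systems.

import re, subprocess, collections

# assumed hardware-error signatures and severity labels, for illustration only
PATTERNS = {
    r"EDAC .*CE":               ("RAM", "correctable"),
    r"EDAC .*UE":               ("RAM", "uncorrectable"),
    r"mce: \[Hardware Error\]": ("CPU", "machine-check"),
    r"I/O error":               ("HDD", "io-error"),
}

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
hits = collections.Counter()
for line in log.splitlines():
    for pat, (part, severity) in PATTERNS.items():
        if re.search(pat, line):
            hits[(part, severity)] += 1
print(hits)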

Keywords: hardware errors, kernel logs, Linux servers, RAM, hard disk, CPU

Procedia PDF Downloads 123
328 Text Mining of Twitter Data Using a Latent Dirichlet Allocation Topic Model and Sentiment Analysis

Authors: Sidi Yang, Haiyi Zhang

Abstract:

Twitter is a microblogging platform where millions of users daily share their attitudes, views, and opinions. Using a probabilistic Latent Dirichlet Allocation (LDA) topic model to discern the most popular topics in Twitter data is an effective way to analyze a large set of tweets and find a set of topics in a computationally efficient manner. Sentiment analysis provides an effective method to show the emotions and sentiments found in each tweet and an efficient way to summarize the results in a manner that is clearly understood. The primary goal of this paper is to explore text mining by extracting and analyzing useful information from unstructured text using two approaches: LDA topic modelling and sentiment analysis, applied to Twitter plain-text data in English. These two methods allow people to mine data more effectively and efficiently. The LDA topic model and sentiment analysis can also be applied to provide insightful views in business and scientific fields.
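
A minimal sentiment analysis sketch in Python using NLTK's VADER analyzer, one plausible tool for this task; the paper does not specify its implementation, and the example tweets are invented.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon fetch
sia = SentimentIntensityAnalyzer()

tweets = ["I love the new update, works flawlessly!",
          "Worst service ever. Never again.",
          "It is okay, nothing special."]
for t in tweets:
    # compound score runs from -1 (most negative) to +1 (most positive)
    print(sia.polarity_scores(t)["compound"], t)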

Keywords: text mining, Twitter, topic model, sentiment analysis

Procedia PDF Downloads 150
327 Text Based Shuffling Algorithm on Graphics Processing Unit for Digital Watermarking

Authors: Zayar Phyo, Ei Chaw Htoon

Abstract:

In a New-LSB-based steganography method, the Fisher-Yates algorithm is used to permute an existing array randomly. However, that algorithm becomes slow and runs into memory overflow problems when processing images of large dimensions. The Text-Based Shuffling algorithm therefore selects only the necessary pixels, used as hiding positions for characters at specific positions of an image, according to the length of the input text. In this paper, an enhanced text-based shuffling algorithm is presented that harnesses the power of the GPU to achieve better performance. The proposed algorithm employs the OpenCL Aparapi framework, along with an XORShift kernel that includes a Pseudo-Random Number Generator (PRNG) kernel. The PRNG is applied to produce random numbers inside the OpenCL kernel. Experiments show that the proposed algorithm, running on a GPU, achieves faster processing speed and better efficiency without the disruption of unnecessary operating system tasks.
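
A rough Python sketch of the core idea, selecting only as many pixel positions as the payload needs with an XORShift PRNG rather than permuting the whole image; the 32-bit XORShift variant, seed and LSB bookkeeping are assumptions for illustration (the paper's version runs inside an OpenCL/Aparapi kernel on the GPU).

import numpy as np

def xorshift32(state):
    """One step of a minimal 32-bit XORShift PRNG (assumed variant)."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def pick_positions(n_pixels, text, seed=2463534242):
    """Choose len(text)*8 distinct pixel indices, one per payload bit (LSB embedding)."""
    need = len(text) * 8
    chosen, seen, state = [], set(), seed
    while len(chosen) < need:
        state = xorshift32(state)
        idx = state % n_pixels
        if idx not in seen:
            seen.add(idx)
            chosen.append(idx)
    return np.array(chosen)

print(pick_positions(512 * 512, "hello")[:8])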

Keywords: LSB based steganography, Fisher-Yates algorithm, text-based shuffling algorithm, OpenCL, XORShiftKernel

Procedia PDF Downloads 121
326 Polymer Nanocarrier for Rheumatoid Arthritis Therapy

Authors: Vijayakameswara Rao Neralla, Jueun Jeon, Jae Hyung Park

Abstract:

To develop a potential nanocarrier for the diagnosis and treatment of rheumatoid arthritis (RA), we prepared a hyaluronic acid (HA)-5β-cholanic acid (CA) conjugate with an acid-labile ketal linker. This conjugate can self-assemble under aqueous conditions to produce pH-responsive HA-CA nanoparticles as potential carriers of the anti-inflammatory drug methotrexate (MTX). MTX was rapidly released from the nanoparticles under the conditions of inflamed synovial tissue in RA. In vitro cytotoxicity data showed that the pH-responsive HA-CA nanoparticles were non-toxic to RAW 264.7 cells. In vivo biodistribution results confirmed that, after systemic administration, the pH-responsive HA-CA nanoparticles selectively accumulated in the inflamed joints of collagen-induced arthritis mice. These results indicate that pH-responsive HA-CA nanoparticles represent a promising candidate drug carrier for RA therapy.

Keywords: rheumatoid arthritis, hyaluronic acid, nanocarrier, self-assembly, MTX

Procedia PDF Downloads 269
325 Analysis of Pangasinan State University: Bayambang Students’ Concerns Through Social Media Analytics and Latent Dirichlet Allocation Topic Modelling Approach

Authors: Matthew John F. Sino Cruz, Sarah Jane M. Ferrer, Janice C. Francisco

Abstract:

The COVID-19 pandemic has affected more than 114 countries all over the world since it was declared a global health concern in 2020. Different sectors, including education, have shifted to remote or distance setups, following the guidelines set to prevent the spread of the disease. One of the higher education institutes that shifted to a remote setup is Pangasinan State University (PSU). In order to continue providing quality instruction to its students, PSU designed a Flexible Learning Model to keep serving its stakeholders amidst the pandemic. The model covers the redesign of instruction delivery in a remote setup and the technology needed to support these adjustments. The primary goal of this study is to determine the insights of the PSU-Bayambang students into the remote setup implemented during the pandemic and how they perceived the initiatives employed, in relation to their experiences of flexible learning. In this study, a topic modelling approach was implemented using Latent Dirichlet Allocation, applied to a dataset gathered from the students' social media posts. The results show that the most common concerns of the students include time and resource management, poor internet connection, and difficulty coping with the flexible learning modality. Furthermore, the findings of the study can serve as one of the bases for the administration to review and improve the policies and initiatives implemented during the pandemic in relation to remote service delivery. In addition, further studies can be conducted to determine the overall sentiment of other stakeholders regarding the policies implemented at the University.

Keywords: COVID-19, topic modelling, students’ sentiment, flexible learning, Latent Dirichlet allocation

Procedia PDF Downloads 90
324 Parameter Estimation for the Mixture of Generalized Gamma Model

Authors: Wikanda Phaphan

Abstract:

The mixture generalized gamma distribution is a combination of two distributions, the generalized gamma distribution and the length-biased generalized gamma distribution, presented by Suksaengrakcharoen and Bodhisuwan in 2014. Its probability density function (pdf) is fairly complex, which creates problems in parameter estimation: the estimators cannot be calculated in closed form, so numerical estimation must be used. In this study, we present a new approach to parameter estimation using the expectation-maximization (EM) algorithm, the conjugate gradient method, and the quasi-Newton method. Data were generated by the acceptance-rejection method and used to estimate α, β, λ and p, where λ is the scale parameter, p is the weight parameter, and α and β are the shape parameters. A Monte Carlo technique was used to assess the estimators' performance, with sample sizes of 10, 30 and 100 and the simulations repeated 20 times in each case. We evaluated the effectiveness of the estimators by considering the mean squared errors and the bias. The findings revealed that the EM algorithm came closest to the actual values, and that the maximum likelihood estimators obtained via the conjugate gradient and quasi-Newton methods are less precise than those obtained via the EM algorithm.
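
To illustrate the optimization side, the following Python sketch maximizes a mixture likelihood with SciPy's conjugate gradient and quasi-Newton (BFGS) routines. As a simplification it uses a two-component generalized gamma mixture rather than the paper's exact length-biased formulation; the data, parameterization and starting values are assumptions.

import numpy as np
from scipy.stats import gengamma
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# toy data from a 2-component generalized-gamma mixture (true weight 0.6)
x = np.concatenate([gengamma.rvs(2.0, 1.5, scale=1.0, size=600, random_state=rng),
                    gengamma.rvs(4.0, 1.5, scale=2.0, size=400, random_state=rng)])

def nll(theta):
    # log-parameterization keeps shapes/scales positive; logistic keeps p in (0, 1)
    a1, a2, c, s1, s2, q = np.exp(theta[:5]).tolist() + [theta[5]]
    p = 1.0 / (1.0 + np.exp(-q))
    f = p * gengamma.pdf(x, a1, c, scale=s1) + (1 - p) * gengamma.pdf(x, a2, c, scale=s2)
    return -np.log(f + 1e-300).sum()

theta0 = np.zeros(6)
for method in ("CG", "BFGS"):        # conjugate gradient vs quasi-Newton
    res = minimize(nll, theta0, method=method)
    print(method, round(res.fun, 2))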

Keywords: conjugate gradient method, quasi-Newton method, EM-algorithm, generalized gamma distribution, length biased generalized gamma distribution, maximum likelihood method

Procedia PDF Downloads 198
323 Classification of Barley Varieties by Artificial Neural Networks

Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran

Abstract:

In this study, an Artificial Neural Network (ANN) was developed in order to classify barley varieties. For this purpose, the physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of the grain, were determined, and these properties were found to be statistically significant with respect to variety. Three ANN models, N-1, N-2 and N-3, were constructed and their performances compared. The best-fit model was N-1, whose structure was designed with 11 inputs, 2 hidden layers and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of the grain were used as input parameters, and variety as the output parameter. R², root mean square error and mean error for the N-1 model were found to be 99.99%, 0.00074 and 0.009%, respectively. All results obtained by the N-1 model were observed to be quite consistent with real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
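
A minimal scikit-learn sketch of an ANN classifier with two hidden layers over 11 physical features, mirroring the N-1 topology in spirit; the hidden layer sizes, synthetic data and training settings are assumptions, not the paper's trained network.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# 11 features per kernel (thousand-kernel weight, diameter, ..., colour parameters)
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 11))            # synthetic stand-in for measurements
y = rng.integers(0, 8, size=400)          # 8 barley varieties

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8),  # two hidden layers (assumed sizes)
                                  max_iter=2000, random_state=0))
clf.fit(X, y)
print(clf.score(X, y))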

Keywords: physical properties, artificial neural networks, barley, classification

Procedia PDF Downloads 146
322 A Numerical Investigation of Total Temperature Probes Measurement Performance

Authors: Erdem Meriç

Abstract:

Measuring the total temperature of an air flow accurately is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs that are characterized experimentally are used. In this study, a validation case, an experimental characterization of a specific class of total temperature probes, is selected from the literature to develop a numerical conjugate heat transfer analysis methodology for studying the total temperature probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate the flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between the inlet and outlet hole areas and the probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for total temperature measurement errors for a specific class of probes in different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
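
The measurement error such probes correct for can be illustrated with the standard recovery-factor relation, a textbook formula rather than the paper's calibrated thermal model; a probe reading T_j = T_s(1 + r·(γ-1)/2·M²) is inverted to recover the true total temperature, with the recovery factor r assumed here.

# a minimal sketch, assuming calorically perfect air (constant gamma)
def total_temperature(T_junction, mach, recovery=0.97, gamma=1.4):
    """Invert T_j = T_s*(1 + r*c) with c = (gamma-1)/2 * M^2 and T_t = T_s*(1 + c)."""
    c = 0.5 * (gamma - 1.0) * mach**2
    T_static = T_junction / (1.0 + recovery * c)
    return T_static * (1.0 + c)

# example: junction reads 540 K at Mach 0.8; corrected total temperature
print(total_temperature(T_junction=540.0, mach=0.8))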

Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes

Procedia PDF Downloads 105
321 SVM-Based Modeling of Mass Transfer Potential of Multiple Plunging Jets

Authors: Surinder Deswal, Mahesh Pal

Abstract:

The paper investigates the potential of a support vector machine based regression approach to model the mass transfer capacity of multiple plunging jets, both vertical (θ = 90°) and inclined (θ = 60°). The data set used in this study consists of four input parameters with a total of eighty-eight cases. For testing, tenfold cross-validation was used. Correlation coefficient values of 0.971 and 0.981 (root mean square error values of 0.0025 and 0.0020) were achieved using polynomial and radial basis kernel functions, respectively. The results suggest improved performance of the radial basis function in comparison to polynomial kernel based support vector machines. The overall mass transfer coefficient estimated by both kernel functions is in good agreement with the actual experimental values (within a scatter of ±15%), thereby suggesting the utility of the support vector machine based regression approach.
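
A hedged scikit-learn sketch of the comparison, support vector regression with polynomial and radial basis kernels under tenfold cross-validation; the synthetic data and hyperparameters are assumptions standing in for the 88 experimental cases.

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
X = rng.uniform(size=(88, 4))             # four jet parameters, 88 cases
y = 0.002 + 0.003*X[:, 0]*X[:, 1] + 0.0005*rng.normal(size=88)  # synthetic mass transfer coefficient

cv = KFold(n_splits=10, shuffle=True, random_state=0)   # tenfold CV, as in the paper
for kernel in ({"kernel": "poly", "degree": 2}, {"kernel": "rbf"}):
    model = SVR(C=10.0, epsilon=1e-4, **kernel)         # assumed hyperparameters
    r = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(kernel["kernel"], r.mean().round(3))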

Keywords: mass transfer, multiple plunging jets, support vector machines, ecological sciences

Procedia PDF Downloads 424
320 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy

Authors: Paul R Armstrong

Abstract:

Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool, where the remaining 90% are hybrid siblings. This introduces time constraints on DH production, and manual sorting is often inaccurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices such as NMR have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform for accurately identifying DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil content, and a categorical reference model of 1 (DH kernel) or 2 (hybrid kernel), which were then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model, developed from all crosses to predict oil content, to sort each induction cross; the second developed a specific model from a single induction cross, for which approximately fifty DH and one hundred hybrid kernels were used. This second approach used the categorical reference values of 1 and 2, instead of oil content, for the PLS model, and the kernels selected for the calibration set were manually referenced based on traditional commercial methods using the coloration of the tip cap and germ areas. The generalized PLS oil model statistics were R² = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting with this model extracted 55% to 85% of the haploid kernels from the four induction crosses. Generating a model for each cross yielded model statistics ranging from R² = 0.96 to 0.98 and RMSE from 0.08 to 0.10; sorting in this case resulted in 100% correct classification but required cross-specific models. In summary, the first, generalized oil model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of a sorting model developed from a single cross. The penalty of the second method is that a PLS model must be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
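
A minimal Python sketch of the PLS calibration-and-sort idea using scikit-learn; the synthetic spectra, component count and oil cutoff are assumptions for illustration, not the study's calibration.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
spectra = rng.normal(size=(150, 300))                   # synthetic single-kernel NIR spectra
oil = (10.0 + 4.0*spectra[:, 50] - 2.0*spectra[:, 120]
       + 0.3*rng.normal(size=150))                      # synthetic oil content, %

pls = PLSRegression(n_components=8).fit(spectra, oil)
pred = pls.predict(spectra).ravel()
print("calibration RMSE:", np.sqrt(((pred - oil)**2).mean()).round(3))

# sorting rule: call a kernel haploid when predicted oil falls below a cutoff
cutoff = 8.0                                            # assumed threshold for illustration
print((pred < cutoff).sum(), "kernels routed to the haploid bin")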

Keywords: NIR, haploids, maize, sorting

Procedia PDF Downloads 278
319 Building Scalable and Accurate Hybrid Kernel Mapping Recommender

Authors: Hina Iqbal, Mustansar Ali Ghazanfar, Sandor Szedmak

Abstract:

Recommender systems use artificial intelligence techniques to filter obscure information and can predict whether a user will like a specified item. Kernel Mapping Recommender (KMR) systems have been proposed as accurate, state-of-the-art algorithms that address recommender system design objectives such as the long tail, cold start, and sparsity. The aim of this research is to propose a hybrid framework that can efficiently integrate different versions of the KMR algorithm, namely item-based and user-based KMR. We propose various heuristic algorithms that integrate the different versions of KMR into a unified framework, resulting in improved accuracy and the elimination of problems associated with conventional recommender systems. We have tested our system on a publicly available movies dataset and benchmarked it against KMR. The results (in terms of accuracy, precision, recall, F1 measure and ROC metrics) reveal that the proposed algorithm is quite accurate, especially under cold-start and sparse scenarios.
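
The simplest heuristic of this kind, a convex blend of user-based and item-based predictions, can be sketched as follows; this is a generic illustration, not a reproduction of the paper's heuristics or the KMR internals.

import numpy as np

def hybrid_predict(user_scores, item_scores, w=0.5):
    """Convex blend of user-based and item-based predicted ratings.
    The weight w would be tuned on a validation split; 0.5 is a placeholder."""
    return w * user_scores + (1.0 - w) * item_scores

# toy predicted-rating vectors for one user over five items
user_based = np.array([4.1, 3.0, 2.2, 4.8, 3.5])
item_based = np.array([3.9, 3.4, 2.0, 4.6, 3.9])
print(hybrid_predict(user_based, item_based, w=0.6))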

Keywords: Kernel Mapping Recommender Systems, hybrid recommender systems, cold start, sparsity, long tail

Procedia PDF Downloads 309
318 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not mask an allele of a potential contributor and is considered an artefact, presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology for distinguishing between stutters and real alleles is essential for the accuracy of the interpretation, and, sensibly, any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
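
To make the Dirichlet-process prior concrete, here is a truncated stick-breaking draw of mixture weights in Python, the standard construction mentioned above, with the concentration and truncation level chosen arbitrarily for illustration.

import numpy as np

def stick_breaking(alpha, n_components, rng):
    """Truncated stick-breaking draw of Dirichlet-process mixture weights."""
    betas = rng.beta(1.0, alpha, size=n_components)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(0)
w = stick_breaking(alpha=2.0, n_components=15, rng=rng)
print(w.round(3), w.sum().round(3))   # weights decay; sum < 1 due to truncation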

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 305
317 Angular Correlation and Independent Particle Model in Two-Electron Atomic Systems

Authors: Tokuei Sako

Abstract:

The ground and low-lying singly-excited states of He and He-like atomic ions have been studied by the full configuration interaction (FCI) method, focusing on the angular correlation between the two electrons in the studied systems. The two-electron angle density distribution, obtained by integrating the square modulus of the FCI wave function over all coordinates other than the interelectronic angle, shows a distinct trend between the singlet-triplet pair of states for different values of the nuclear charge Zn. Further, both the singlet and triplet distributions tend to show an increasingly strong dependence on the interelectronic angle as Zn increases, in contrast to the well-known fact that the correlation energy approaches zero for increasing Zn. This seemingly contradictory observation has been rationalized on the basis of the recently introduced concept of so-called conjugate Fermi holes.

Keywords: He-like systems, angular correlation, configuration interaction wave function, conjugate Fermi hole

Procedia PDF Downloads 385
316 Effects of Palm Kernel Expeller Processing on the Ileal Populations of Lactobacilli and Escherichia Coli in Broiler Chickens

Authors: B. Navidshad

Abstract:

The main objective of this study was to examine the effects of enzymatic treatment and of the shell content of palm kernel expeller (PKE) on the ileal Lactobacilli and Escherichia coli populations of broiler chickens. At the finisher phase, one hundred male broiler chickens (Cobb-500) were fed a control diet or diets containing 200 g/kg of normal PKE (70 g/kg shell), low-shell PKE (30 g/kg shell), enzymatically treated PKE, or low-shell enzymatically treated PKE. Quantitative real-time PCR was used to determine the ileal bacterial populations. The lowest ileal Lactobacilli population was found in the chickens fed the low-shell PKE diet. Dietary normal PKE or low-shell enzymatically treated PKE decreased the Escherichia coli population compared to the control diet. The results suggest that PKE can be included at up to 200 g/kg in the finisher diet; however, any screening practice that reduces the shell content of PKE without enzymatic degradation of β-mannan decreases the ileal Lactobacilli population.

Keywords: palm kernel expeller, exogenous enzyme, shell content, ileum bacteria, broiler chickens

Procedia PDF Downloads 326
315 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination

Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana

Abstract:

Nowadays, with the increase in wafer size and the decrease in the critical dimensions of integrated circuit manufacturing in modern high-tech, the microelectronics industry must pay maximum attention to the challenge of contamination control. The move to 300 mm is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer transport and storage. In these pods, airborne molecular cross-contamination (AMC) may occur between the wafers and the pod. A predictive approach using modeling and computational methods is a very powerful way to understand and quantify AMC cross-contamination processes. This work investigates the numerical tools required to study AMC cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the physical constants were calibrated by minimizing the least-squares distance between the model (the analytical 1-D solution) and the experimental data. The transient behavior of the AMCs was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also expressed as boundary conditions, using a switch from a Dirichlet to a Neumann condition together with an interface condition. The methodology is applied, first, using optimization methods with the analytical solution to define the physical constants, and second, using the finite element method, including the adsorption kinetics and the switch from the Dirichlet to the Neumann condition.
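
A one-dimensional finite-difference sketch of the Dirichlet-to-Neumann boundary switch described above: the surface concentration is imposed until a switch time, after which an adsorption-flux (Robin-type) condition takes over. All coefficients are illustrative assumptions, not the paper's calibrated constants.

import numpy as np

D, L, N = 1e-6, 0.01, 100          # diffusivity (m^2/s), domain thickness (m), nodes
dx = L / N
dt = 0.4 * dx**2 / D               # explicit-Euler stable step
c = np.zeros(N + 1)                # AMC concentration profile
t_switch, k_ads = 5.0, 1e-4        # switch time (s), adsorption rate (m/s), assumed

t = 0.0
for _ in range(20000):
    if t < t_switch:
        c[0] = 1.0                              # Dirichlet: imposed surface concentration
    else:
        c[0] = c[1] / (1.0 + k_ads * dx / D)    # Neumann/Robin: D*dc/dx = k_ads*c at x=0
    c[-1] = c[-2]                               # zero-flux far wall
    c[1:-1] += dt * D * (c[2:] - 2.0*c[1:-1] + c[:-2]) / dx**2
    t += dt
print(c[:5].round(4))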

Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite element methods, Fick’s law, optimization

Procedia PDF Downloads 472
314 On the Grid Technique by Approximating the Derivatives of the Solution of the Dirichlet Problems for (1+1) Dimensional Linear Schrodinger Equation

Authors: Lawrence A. Farinola

Abstract:

Four-point implicit schemes are constructed for the approximation of the first and pure second order derivatives, with respect to the time variable t, of the solution of the Dirichlet problem for the one-dimensional Schrödinger equation. Also, special four-point implicit difference boundary value problems are proposed for the first and pure second derivatives of the solution with respect to the spatial variable x. The grid method is also applied to the mixed second derivative of the solution of the linear time-dependent Schrödinger equation. It is assumed that the initial function belongs to the Hölder space C⁸⁺ᵃ, 0 < α < 1, the wave function given in the Schrödinger equation is from the Hölder space Cₓ,ₜ⁶⁺ᵃ, ³⁺ᵃ/², the boundary functions are from C⁴⁺ᵃ, and between the initial and the boundary functions the conjugation conditions of orders q = 0, 1, 2, 3, 4 are satisfied. It is proven that the solutions of the proposed difference schemes converge uniformly on the grids at the rate O(h² + k), where h is the step size in x and k is the step size in time. Numerical experiments are presented to support the analysis.

Keywords: approximation of derivatives, finite difference method, Schrödinger equation, uniform error

Procedia PDF Downloads 100
313 On Boundary Values of Hardy Space Banach Space-Valued Functions

Authors: Irina Peterburgsky

Abstract:

Let T be the unit circumference of the complex plane, E be a Banach space, and E* and E** be its conjugate and second conjugate spaces, respectively. In general, a Hardy space Hp(E), p ≥ 1, of functions acting from the open unit disk to E may contain a function for which even the weak nontangential (angular) boundary value in the space E** fails to exist at every point of the unit circumference T (C. Grossetete). The situation is "better" when certain restrictions are applied to the Banach space of values (more or less resembling the classical case of scalar-valued functions, depending on the constraints, as shown by R. Ryan). This paper shows that, nevertheless, in the case of a Banach space of general type, the following positive statement is true: Proposition. For any function f(z) from Hp(E), p ≥ 1, there exists a function F(eiθ) on the unit circumference T with values in E** whose Poisson integral (in the Pettis sense) regains the function f(z) on the open unit disk. Some characteristics of the function F(eiθ) are demonstrated.
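
For the scalar-valued case, the Poisson integral in the proposition takes the classical form below (with the Pettis integral replacing the Lebesgue integral when F takes values in E**):

\[
  f\bigl(re^{i\theta}\bigr) = \frac{1}{2\pi}\int_{0}^{2\pi} P_r(\theta - t)\, F\bigl(e^{it}\bigr)\, dt,
  \qquad
  P_r(\vartheta) = \frac{1-r^{2}}{1-2r\cos\vartheta+r^{2}}, \quad 0 \le r < 1 .
\]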

Keywords: hardy spaces, Banach space-valued function, boundary values, Pettis integral

Procedia PDF Downloads 214
312 Optimization of the Flexural Strength of Biocomposites Samples Reinforced with Resin for Engineering Applications

Authors: Stephen Akong Takim

Abstract:

This study focused on the optimization of the flexural strength of bio-composite samples of palm kernel, whelk, clam and periwinkle shells and bamboo fiber, reinforced with resin, for engineering applications. The aim of the study was to formulate different samples of resin-reinforced bio-composite for engineering applications and to evaluate the flexural strength of the fabricated composites. The hand lay-up technique was used to produce the composites, incorporating different percentage compositions of the shells/fiber (10%, 15%, 20%, 25% and 30%) into varied proportions of epoxy resin and catalyst. The cured samples, after 24 hours, were subjected to tensile, impact, flexural and water absorption tests. The experiments were conducted using the Taguchi optimization method with an L25 (5x5) design, with five design parameters and five level combinations, in Minitab 18 statistical software. The results showed an average flexural strength of 114.87 MPa, compared to 72.33 MPa for the unreinforced bio-composite. The study recommends that agricultural waste, such as palm kernel shells, whelk shells, clam shells, periwinkle shells and bamboo fiber, be converted into important engineering applications.

Keywords: bio-composite, resin, palm kernel shells, whelk shells, periwinkle shells, bamboo fiber, Taguchi technique, engineering applications

Procedia PDF Downloads 46
311 Use of Biomass as Co-Fuel in Briquetting of Low-Rank Coal: Strengthen the Energy Supply and Save the Environment

Authors: Mahidin, Yanna Syamsuddin, Samsul Rizal

Abstract:

In order to fulfill world energy demand, several efforts have been made to look for new and renewable energy candidates to substitute for oil and gas. Biomass is one of the new and renewable energy sources that is abundant in Indonesia. Palm kernel shell is a kind of biomass discharged from palm oil industries as a waste. On the other hand, Jatropha curcas, which is easy to grow in Indonesia, is also a typical energy source, either for bio-diesel or as biomass. In this study, biomass was used as a co-fuel in the briquetting of low-rank coal to suppress the release of emissions (such as CO, NOx and SOx) during coal combustion. A CaO-based desulfurizer was also added to ensure that SOx capture occurs effectively. The ratios of coal to palm kernel shell (w/w) in the bio-briquettes were 50:50, 60:40, 70:30, 80:20 and 90:10, while the ratios of calcium to sulfur (Ca/S, mole/mole) were 1:1, 1.25:1, 1.5:1, 1.75:1 and 2:1. The bio-briquettes were then subjected to physical characterization and combustion tests. The results show that the maximum weight loss in the durability measurement was ±6%. In addition, the highest stove efficiency for each desulfurizer was observed at a coal/PKS ratio of 90:10 and a Ca/S ratio of 1:1 (except for the scallop shell desulfurizer, for which it appeared at two Ca/S ratios, 1.25:1 and 1.5:1): 13.8% for the lime, 15.86% for the oyster shell, 14.54% for the scallop shell and 15.84% for the green mussel shell desulfurizers.

Keywords: biomass, low-rank coal, bio-briquette, new and renewable energy, palm kernel shell

Procedia PDF Downloads 416
310 Fluidized-Bed Combustion of Biomass with Elevated Alkali Content: A Comparative Study between Two Alternative Bed Materials

Authors: P. Ninduangdee, V. I. Kuprianov

Abstract:

Palm kernel shell is an important bioenergy resource in Thailand. However, due to the elevated alkali content of the biomass ash, this oil palm residue shows a high tendency toward bed agglomeration in fluidized-bed combustion systems using conventional bed material (silica sand). In this study, palm kernel shell was burned in a conical fluidized-bed combustor (FBC) using alumina and dolomite as alternative bed materials to prevent bed agglomeration. For each bed material, combustion tests were performed at a 45 kg/h fuel feed rate with excess air within 20-80%. Experimental results revealed rather weak effects of the bed material type but a substantial influence of excess air on the behaviour of temperature, O2, CO, CxHy, and NO inside the reactor, as well as on the combustion efficiency and major gaseous emissions of the conical FBC. The optimal level of excess air ensuring high combustion efficiency (about 98.5%) and acceptable emission levels was found to be about 40% when using alumina and 60% with dolomite. By using these alternative bed materials, bed agglomeration can be prevented when burning the shell in the proposed conical FBC. However, both bed materials exhibited significant changes in their morphological, physical and chemical properties over time.

Keywords: palm kernel shell, fluidized-bed combustion, alternative bed materials, combustion and emission performance, bed agglomeration prevention

Procedia PDF Downloads 225
309 An Approach to Apply Kernel Density Estimation Tool for Crash Prone Location Identification

Authors: Kazi Md. Shifun Newaz, S. Miaji, Shahnewaz Hazanat-E-Rabbi

Abstract:

In this study, the kernel density estimation tool has been used to identify the most crash-prone locations on a national highway of Bangladesh. As in other developing countries, road traffic crashes (RTC) in Bangladesh have now become a great social alarm, and the situation is deteriorating day by day. Today's black spot identification process is not based on modern technical tools and in most cases provides wrong output. In this situation, characteristic analysis and black spot identification by spatial analysis would be an effective and low-cost approach to ensuring road safety. The methodology of this study incorporates a framework, on the basis of a spatial-temporal study, to identify the locations where most RTCs occur. A very important and economic corridor, the Dhaka to Sylhet highway, was chosen to apply the method. This research proposes that the KDE method for the identification of hazardous road locations (HRL) could be used for all other national highways in Bangladesh and also for other developing countries. Some recommendations are suggested for policy makers to reduce RTCs on the Dhaka-Sylhet highway, especially at black spots.
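
A minimal Python sketch of KDE-based black spot screening along a corridor; the crash coordinates, bandwidth and grid are synthetic assumptions, not the Dhaka-Sylhet data.

import numpy as np
from sklearn.neighbors import KernelDensity

# synthetic crash coordinates (km along the corridor, lateral offset)
rng = np.random.default_rng(0)
crashes = np.vstack([rng.normal([12.0, 0.0], [0.8, 0.1], size=(60, 2)),
                     rng.normal([47.5, 0.0], [1.2, 0.1], size=(40, 2)),
                     rng.uniform([0, -0.2], [60, 0.2], size=(50, 2))])

kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(crashes)
grid = np.column_stack([np.linspace(0, 60, 601), np.zeros(601)])
density = np.exp(kde.score_samples(grid))        # crash density along the road
print(grid[density.argmax(), 0], "km is the densest (black spot) location")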

Keywords: hazardous road location (HRL), crash, GIS, kernel density

Procedia PDF Downloads 274
308 Estimating Destinations of Bus Passengers Using Smart Card Data

Authors: Hasik Lee, Seung-Young Kho

Abstract:

Nowadays, automatic fare collection (AFC) systems are widely used in many countries. However, smart card data from many cities do not contain alighting information, which is necessary to build OD matrices. Therefore, in order to utilize smart card data, the destinations of passengers must be estimated. In this paper, kernel density estimation was used to forecast the probabilities of the alighting stations of bus passengers and applied to smart card data from Seoul, Korea, which contains both boarding and alighting information. The method was also validated against actual data. In some cases, the stochastic method was more accurate than the deterministic method. It is therefore sufficiently accurate to be used to build OD matrices.

Keywords: destination estimation, Kernel density estimation, smart card data, validation

Procedia PDF Downloads 324
307 Preliminary Results on a Maximum Mean Discrepancy Approach for Seizure Detection

Authors: Boumediene Hamzi, Turky N. AlOtaiby, Saleh AlShebeili, Arwa AlAnqary

Abstract:

We introduce a data-driven method for seizure detection, drawing on recent progress in machine learning. The method is based on embedding probability measures in a high- (or infinite-) dimensional reproducing kernel Hilbert space (RKHS), where the Maximum Mean Discrepancy (MMD) is computed. The MMD is a metric between probability measures, computed as the distance between the means of the probability measures after they are embedded in an RKHS. Working in an RKHS provides a convenient and general functional-analytic framework for the theoretical understanding of data. We apply this approach to the problem of seizure detection.
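
The MMD estimate itself is short to write down. Below is a hedged Python sketch of the (biased) squared-MMD estimator with a Gaussian RBF kernel on synthetic feature windows; the feature dimensions and kernel width are assumptions, not the authors' EEG pipeline.

import numpy as np

def mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD with a Gaussian RBF kernel."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, size=(300, 8))   # stand-in for non-seizure features
base2    = rng.normal(0.0, 1.0, size=(300, 8))   # second draw, same distribution
event    = rng.normal(0.6, 1.3, size=(300, 8))   # stand-in for seizure-window features
print(mmd2(baseline, base2, 0.1))                # near zero: same distribution
print(mmd2(baseline, event, 0.1))                # clearly larger: distributions differ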

Keywords: kernel methods, maximum mean discrepancy, seizure detection, machine learning

Procedia PDF Downloads 207
306 An Activatable Theranostic for Targeted Cancer Therapy and Imaging

Authors: Sankarprasad Bhuniya, Sukhendu Maiti, Eun-Joong Kim, Hyunseung Lee, Jonathan L. Sessler, Kwan Soo Hong, Jong Seung Kim

Abstract:

A new theranostic strategy is described. It is based on the use of an "all in one" prodrug, namely the biotinylated piperazine-rhodol conjugate 4a. This conjugate, which incorporates the anticancer drug SN-38, undergoes self-immolative cleavage when exposed to biological thiols. This leads to the tumor-targeted release of the active SN-38 payload along with the fluorophore 1a. The release is made selective as a result of the biotin functionality. Fluorophore 1a is 32-fold more fluorescent than prodrug 4a and permits the delivery and release of the SN-38 payload to be monitored easily in vitro and in vivo, as inferred from cell studies and ex vivo analyses of mouse xenografts derived from HeLa cells, respectively. Prodrug 4a also displays anticancer activity in the HeLa cell murine xenograft tumor model. On the basis of these findings, we suggest that the present strategy, which combines within a single agent the key functions of targeting, release, imaging, and treatment, may have a role to play in cancer diagnosis and therapy.

Keywords: theranostic, prodrug, cancer therapy, fluorescence

Procedia PDF Downloads 512