Search results for: Kernel Method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18589

18529 Physically Informed Kernels for Wave Loading Prediction

Authors: Daniel James Pitchforth, Timothy James Rogers, Ulf Tyge Tygesen, Elizabeth Jane Cross

Abstract:

Wave loading is a primary cause of fatigue within offshore structures, and its quantification presents a challenging and important subtask within the structural health monitoring (SHM) framework. The accurate representation of physics in such environments is difficult, however, which has driven the development of data-driven techniques in recent years. Within many industrial applications, empirical laws remain the preferred method of wave loading prediction due to their low computational cost and ease of implementation. This paper aims to develop an approach that combines data-driven Gaussian process models with physical empirical solutions for wave loading, including Morison's Equation. The aim is to incorporate physics directly into the covariance function (kernel) of the Gaussian process, enforcing derived behaviours whilst still allowing enough flexibility to account for phenomena, such as vortex shedding, that may not be represented within the empirical laws. The combined approach has a number of advantages, including improved performance over either component used independently and interpretable hyperparameters.
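
As a rough illustration of the idea (not the authors' Morison-derived kernel), the sketch below combines a placeholder physics-inspired periodic term with a standard RBF kernel; since a sum of valid covariance functions is itself a valid covariance function, the resulting GP prior encodes the derived behaviour while retaining data-driven flexibility. All parameter names and values are hypothetical.

```python
import numpy as np

def rbf_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    # Standard squared-exponential kernel for flexible residual behaviour.
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def physics_kernel(x1, x2, variance=1.0, period=8.0):
    # Placeholder periodic term standing in for a wave-derived covariance;
    # the paper derives its physical term from Morison's equation instead.
    d = x1[:, None] - x2[None, :]
    return variance * np.cos(2 * np.pi * d / period)

def combined_kernel(x1, x2, theta):
    # A sum of kernels is itself a valid covariance function, so derived
    # behaviour and data-driven flexibility can coexist in one GP prior.
    return (physics_kernel(x1, x2, theta["phys_var"], theta["period"])
            + rbf_kernel(x1, x2, theta["rbf_var"], theta["lengthscale"]))

# GP regression with the combined prior (noisy observations y at inputs x).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 40.0, 80)
y = np.cos(2 * np.pi * x / 8.0) + 0.1 * rng.standard_normal(x.size)
theta = {"phys_var": 1.0, "period": 8.0, "rbf_var": 0.3, "lengthscale": 2.0}

K = combined_kernel(x, x, theta) + 1e-2 * np.eye(x.size)  # noise variance
x_star = np.linspace(0.0, 40.0, 200)
K_star = combined_kernel(x_star, x, theta)
mean = K_star @ np.linalg.solve(K, y)  # GP posterior mean prediction
```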

Keywords: offshore structures, Gaussian processes, physics-informed machine learning, kernel design

Procedia PDF Downloads 156
18528 Neither ‘Institutional’ nor ‘Remedial’: Court-Ordered Trusts in English and Canadian Private Law

Authors: Adam Reilly

Abstract:

The major claim of this paper is that both the English and Canadian branches of the common law have been ill-served by the 'institutional'/'remedial' taxonomy of constructive trusts; what shall be termed the 'orthodox taxonomy'.  The orthodox taxonomy is found both within the case law and the attendant academic commentary.  In truth, the orthodox taxonomy is especially dangerous because it contains a kernel of truth together with a misconception; the interplay of both has caused more harm than the misconception alone would have managed.  The kernel of truth is that some trusts arise automatically when the necessary facts occur ('institutional') and other trusts arise only by way of court order ('remedial').  The misconception is that these two labels represent an exhaustive nomenclature of two distinct 'kinds' of constructive trust such that any particular constructive trust must necessarily be 'institutional' if it is not 'remedial' and vice versa.  The central difficulty is that our understanding of 'remedial' trusts is relatively poor, with the result that anyone using the orthodox taxonomy shall be led astray in one of three ways: (i) by rejecting it wholesale; (ii) by adopting one ‘type’ of trust to the exclusion of the other (as in English law); or (iii) by applying it as an analytical device with sub-optimal results which are difficult to defend.  This paper shall seek to resolve these difficulties by clarifying the criteria for identifying and distinguishing true 'remedial' constructive trusts.  It shall then provide some working examples of how English and Canadian private law at present misunderstand constructive trusts and how that misunderstanding might be resolved once we distinguish the orthodox taxonomy's kernel of truth from the misconception outlined above.

Keywords: comparative law, constructive trusts, equitable remedies, remedial constructive trusts

Procedia PDF Downloads 115
18527 Paddy/Rice Singulation for Determination of Husking Efficiency and Damage Using Machine Vision

Authors: M. Shaker, S. Minaei, M. H. Khoshtaghaza, A. Banakar, A. Jafari

Abstract:

In this study, a machine vision and singulation system was developed to separate paddy from rice and to determine paddy husking and rice breakage percentages. The machine vision system consists of three main components: an imaging chamber, a digital camera, and a computer equipped with image processing software. The singulation device consists of a kernel holding surface, a motor with a vacuum fan, and a dimmer. For separation of paddy from rice in the image, it was necessary to set a threshold. Therefore, images of paddy and rice were sampled, the RGB values of the images were extracted using MATLAB software, and the mean and standard deviation of the data were determined. An image processing algorithm was developed in MATLAB to determine paddy/rice separation and rice breakage and paddy husking percentages using the blue-to-red ratio. Tests showed that a threshold of 0.75 is suitable for separating paddy from rice kernels. Evaluation of the image processing algorithm showed accuracies of 98.36% and 91.81% for paddy husking and rice breakage percentage, respectively. Analysis also showed that a suction of 45 mmHg to 50 mmHg, yielding 81.3% separation efficiency, is appropriate for operation of the kernel singulation system.
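
A minimal sketch of the thresholding step, assuming the blue-to-red ratio is computed per pixel; which side of the 0.75 threshold corresponds to paddy versus rice is an assumption here, as the abstract only reports the threshold value itself.

```python
import numpy as np

def separate_paddy_from_rice(rgb_image, threshold=0.75):
    """Classify pixels by blue-to-red ratio; 0.75 follows the paper's finding.

    rgb_image: HxWx3 float array. Which class lies on which side of the
    threshold is assumed for illustration.
    """
    r = rgb_image[..., 0].astype(float)
    b = rgb_image[..., 2].astype(float)
    ratio = b / np.maximum(r, 1e-6)      # avoid division by zero
    return ratio < threshold             # boolean mask of one class

# Example on a synthetic image
img = np.random.rand(100, 100, 3)
mask = separate_paddy_from_rice(img)
print("fraction below threshold:", mask.mean())
```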

Keywords: breakage, computer vision, husking, rice kernel

Procedia PDF Downloads 341
18526 Operational Matrix Method for Fuzzy Fractional Reaction Diffusion Equation

Authors: Sachin Kumar

Abstract:

The fuzzy fractional diffusion equation is widely used to describe physical processes arising in physics, biology, and hydrology. The aim of this article is to study the fuzzy fractional diffusion equation. We consider a mathematical model of the fuzzy space-time fractional diffusion equation in which the unknown function, coefficients, and initial-boundary conditions are fuzzy numbers. First, we derive a fuzzy operational matrix of Legendre polynomials for the Caputo-type fuzzy fractional derivative having a non-singular Mittag-Leffler kernel. The main advantage of this method is that it reduces the fuzzy fractional partial differential equation (FFPDE) to a system of fuzzy algebraic equations, from which the solution of the problem can be found. The feasibility of our approach is shown by numerical examples. Hence, our method is suitable for dealing with FFPDEs and has good accuracy.
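
For reference, the Caputo-type fractional derivative with a non-singular Mittag-Leffler kernel referred to here is commonly known as the Atangana-Baleanu-Caputo derivative; a standard (crisp, non-fuzzy) form is shown below. The paper's fuzzy version extends this definition to fuzzy-valued functions.

```latex
% Atangana-Baleanu-Caputo derivative; B(alpha) is a normalization
% function with B(0) = B(1) = 1, and E_alpha is the one-parameter
% Mittag-Leffler function.
\[
{}^{ABC}_{\;\;\,a}D_t^{\alpha} f(t)
  = \frac{B(\alpha)}{1-\alpha}
    \int_a^t f'(s)\,
    E_\alpha\!\left(-\frac{\alpha}{1-\alpha}\,(t-s)^{\alpha}\right) ds,
\qquad
E_\alpha(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\alpha k + 1)}.
\]
```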

Keywords: fractional PDE, fuzzy valued function, diffusion equation, Legendre polynomial, spectral method

Procedia PDF Downloads 160
18525 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral imaging (HSI) is an attractive tool for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection of apples, quality analysis and grading of citrus fruits, bruise detection of strawberries, visualization of sugar distribution in melons, measuring the ripening of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can be used to concurrently collect large amounts of spatial and spectral data on the objects being observed. The technique yields detection capability that cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detection of fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving problems of a two-class nature. The conventional FKT method has been improved with kernel machines to increase its nonlinear discrimination ability and capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of ground meat by treating fat as the target class, to be separated from the remaining classes (clutter). We have applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for fat ratio in ground meat.
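
The following sketch shows the linear FKT at the heart of the method (the kernel version applies the same construction after a kernel-induced feature mapping, which is omitted here): whitening the summed class covariances makes the per-class eigenvalues sum to one, so directions that are dominant for the target class are automatically weak for the clutter class.

```python
import numpy as np

def fukunaga_koontz_basis(X1, X2):
    """Linear FKT sketch; the paper's KFKT applies the same idea after a
    kernel mapping. X1, X2: (n_i, d) arrays for target and clutter pixels."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    # Whiten the summed covariance: P maps S1 + S2 to the identity.
    vals, vecs = np.linalg.eigh(S1 + S2)
    P = vecs @ np.diag(vals ** -0.5)
    # In the whitened space the classes share eigenvectors, and their
    # eigenvalues sum to one: directions dominant for the target class
    # are automatically weak for the clutter class, and vice versa.
    lam, U = np.linalg.eigh(P.T @ S1 @ P)
    order = np.argsort(lam)[::-1]      # most target-dominant first
    return P @ U[:, order], lam[order]

rng = np.random.default_rng(1)
target = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]], 500)
clutter = rng.multivariate_normal([0, 0], [[1, 0], [0, 2]], 500)
basis, eigenvalues = fukunaga_koontz_basis(target, clutter)
print(eigenvalues)   # values near 1 mark target-dominant directions
```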

Keywords: food (ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods

Procedia PDF Downloads 403
18524 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many feature selection methods, such as the F-score and the Hilbert-Schmidt Independence Criterion (HSIC), have been developed to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights. The smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. A support vector machine was applied to classify the testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC were 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracy increases dramatically using only the first few features: the classification accuracies with respect to feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) approximates the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset first, according to the specific purpose, and then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve classification performance but also reduce the cost of obtaining hyperspectral images.
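
A minimal sketch of the band-weighting idea, assuming the "generalized RBF kernel" takes the common per-feature-bandwidth (ARD) form; the exact kernel and the GA-based separability objective from the paper are not reproduced here.

```python
import numpy as np

def generalized_rbf(X, Y, bandwidths):
    """RBF kernel with one bandwidth per feature (ARD form), a plausible
    reading of the paper's 'generalized RBF kernel'; exact form assumed."""
    w = 1.0 / (2.0 * bandwidths ** 2)                 # per-feature scaling
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2 * w).sum(-1)
    return np.exp(-sq)

def rank_bands(bandwidths):
    # Reciprocal bandwidths act as band weights: the smaller the bandwidth,
    # the larger the weight, hence the more important the band.
    weights = 1.0 / bandwidths
    return np.argsort(weights)[::-1]   # band indices, most important first

# Toy illustration with 5 spectral bands; a GA would tune these bandwidths
# to minimize within-class and maximize between-class separability.
bandwidths = np.array([0.4, 2.5, 0.9, 5.0, 1.2])
X = np.random.default_rng(0).normal(size=(4, 5))
K = generalized_rbf(X, X, bandwidths)   # (4, 4) Gram matrix
print(rank_bands(bandwidths))           # -> [0 2 4 1 3]
```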

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 241
18523 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
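
For context, the standard AMISE expression for a symmetric fourth-order kernel is shown below; this is textbook kernel density estimation theory, not the paper's hybrid-beta-specific derivation.

```latex
% Classical AMISE for a fourth-order kernel K, with R(g) = \int g^2 and
% \mu_4(K) = \int t^4 K(t)\,dt; minimizing over the bandwidth h gives the
% familiar n^{-8/9} rate for fourth-order kernels.
\[
\mathrm{AMISE}(h)
  = \frac{R(K)}{nh}
  + \left(\frac{\mu_4(K)}{4!}\right)^{2} R\!\left(f^{(4)}\right) h^{8},
\qquad
h_{\mathrm{opt}} \propto n^{-1/9},
\quad
\mathrm{AMISE}\!\left(h_{\mathrm{opt}}\right) = O\!\left(n^{-8/9}\right).
\]
```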

Keywords: AMISE, efficiency, fourth-order kernels, hybrid kernels, kernel density estimation

Procedia PDF Downloads 43
18522 Physics-Informed Convolutional Neural Networks for Reservoir Simulation

Authors: Jiangxia Han, Liang Xue, Keda Chen

Abstract:

Despite the significant progress in reservoir simulation using numerical discretization over the last decades, meshing remains complex, and the high degree of freedom of the space-time flow field makes the solution process very time-consuming. Therefore, we present Physics-Informed Convolutional Neural Networks (PICNN) as a hybrid scientific-theory-and-data method for reservoir modeling. Besides labeled data, the model is driven by the scientific theory of the underlying problem, such as governing equations, boundary conditions, and initial conditions. PICNN integrates governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. PICNN provides a methodology to model and history-match flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how this method can be applied in the context of a forward simulation for continuous problems. Furthermore, several complex scenarios are tested, including the presence of data noise, different work schedules, and different well patterns.
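
A toy sketch of the core mechanism, assuming a steady, homogeneous diffusion-type governing equation: a fixed Laplacian stencil enters as a convolution kernel, and its residual joins the data-matching term in the loss. The network, field sizes, and equation are illustrative stand-ins for the paper's actual reservoir setup.

```python
import torch
import torch.nn.functional as F

# Fixed 5-point Laplacian stencil encoded as a convolution kernel; this is
# the sense in which a governing equation enters the network as a
# customized convolution. The network below is an illustrative stand-in.
laplacian = torch.tensor([[[[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]]]])

net = torch.nn.Sequential(          # maps a property field to pressure
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.Tanh(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)

def loss_fn(k_field, p_obs, mask):
    p = net(k_field)
    # Physics residual: Laplace(p) should vanish in the interior for a
    # simplified homogeneous steady-state flow equation.
    residual = F.conv2d(p, laplacian, padding=1)
    physics = residual[..., 1:-1, 1:-1].pow(2).mean()
    # Data matching only where observations (e.g. wells) exist.
    data = ((p - p_obs)[mask]).pow(2).mean()
    return data + physics

k_field = torch.rand(4, 1, 32, 32)          # input property fields
p_obs = torch.rand(4, 1, 32, 32)            # sparse observations
mask = torch.rand(4, 1, 32, 32) > 0.95      # observation locations
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(k_field, p_obs, mask)
    loss.backward()
    opt.step()
```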

Keywords: convolutional neural networks, deep learning, flow and transport in porous media, physics-informed neural networks, reservoir simulation

Procedia PDF Downloads 102
18521 A Semi-Analytical Method for Analysis of the Axially Symmetric Problem on Indentation of a Hot Circular Punch into an Arbitrarily Nonhomogeneous Halfspace

Authors: S. Aizikovich, L. Krenev, Y. Tokovyy, Y. C. Wang

Abstract:

An approximate analytical-numerical solution to the axisymmetric problem of thermo-mechanical indentation of a flat cylindrical punch into an arbitrarily non-homogeneous elastic half-space is constructed by making use of the bilateral asymptotic method. The key point of this method lies in the evaluation of the kernels of the resulting integral equations by means of a numerical technique. Once the structure of the kernel is defined, it is then approximated by an analytical expression of a special kind so that the solution of the integral equation can be achieved analytically. This allows for construction of the solution in an analytical form, which is convenient for analysis of the mechanical effects associated with the arbitrarily presumed non-homogeneity of the material.

Keywords: contact problem, circular punch, arbitrarily-nonhomogeneous halfspace

Procedia PDF Downloads 495
18520 Existence of Minimal and Maximal Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez-Camus

Abstract:

This work proves the existence of at least one minimal and one maximal mild solution to the Cauchy problem for a fractional evolution equation of neutral type involving a general kernel. The equation features an operator A generating a resolvent family and an integral resolvent family on a Banach space X, together with a kernel belonging to a large class that covers many relevant cases from physics applications; in particular, the important case of time-fractional evolution equations of neutral type. The main tools used in this work were the Kuratowski measure of noncompactness, fixed point theorems (specifically of Darbo type), and an iterative method of lower and upper solutions based on an order in X induced by a normal cone P. Initially, the equation is posed as a Cauchy problem involving a fractional derivative in the Caputo sense. The equivalent integral version is then formulated and, by defining a convenient functional using the theory of resolvent families and verifying the hypotheses of the Darbo-type fixed point theorem, we obtain the existence of a mild solution to the initial problem. Furthermore, the existence of minimal and maximal mild solutions is proved through an iterative method of lower and upper solutions, using the Arzelà-Ascoli theorem and Gronwall's inequality. Finally, we recover the case of the derivative in the Caputo sense.

Keywords: fractional evolution equations, Volterra integral equations, minimal and maximal mild solutions, neutral type equations, non-local in time equations

Procedia PDF Downloads 143
18519 Effect of Hydrocolloid Coatings and Bene Kernel Oil on Acrylamide Formation during Potato Deep Frying

Authors: Razieh Niazmand, Dina Sadat Mousavian, Parvin Sharayei

Abstract:

This study investigated the effect of carboxymethyl cellulose (CMC), tragacanth, and saalab hydrocolloids at two concentrations (0.3% and 0.7%), and of different frying media, refined canola oil (RCO), RCO + 1% bene kernel oil (BKO), and RCO + 1 mg/l unsaponifiable matter (USM) of BKO, on acrylamide formation in fried potato slices. The hydrocolloid coatings significantly reduced acrylamide formation in potatoes fried in all oils, although increasing the hydrocolloid concentration from 0.3% to 0.7% produced no additional inhibition of acrylamide. The 0.7% CMC solution was identified as the most promising inhibitor of acrylamide formation in RCO, with a 62.9% reduction in acrylamide content. The addition of BKO or USM to RCO led to a noticeable reduction in the acrylamide level in fried potato slices. The findings suggest that a 0.7% CMC solution and RCO+USM are promising inhibitors of acrylamide formation in fried potato products.

Keywords: CMC, frying, potato, saalab, tragacanth

Procedia PDF Downloads 264
18518 The Various Forms of a Soft Set and Its Extension in Medical Diagnosis

Authors: Biplab Singha, Mausumi Sen, Nidul Sinha

Abstract:

In order to deal with the impreciseness and uncertainty of a system, D. Molodtsov introduced the concept of the 'soft set' in 1999. Since then, a number of related definitions have been conceptualized. This paper presents a study on various forms of soft sets with examples. It covers the concepts of the domain and co-domain of a soft set, conversion to one-one and onto functions, the matrix representation of a soft set and its relation with one-one functions, upper and lower triangular matrices, and the transpose and kernel of a soft set. The paper also gives an idea of the extension of soft sets to medical diagnosis: two soft sets, relating to diseases and symptoms, are considered, and using the AND and OR operations, a diagnosis of the disease is calculated through appropriate examples.
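
A small sketch of the AND and OR operations used in the diagnosis step, following Maji et al.'s standard definitions for soft sets; the universe, symptom, and disease sets are made up for illustration.

```python
from itertools import product

# Soft set over universe U: a map from parameters to subsets of U.
# AND of (F, A) and (G, B) is (H, A x B) with H(a, b) = F(a) & G(b);
# OR uses the union instead (Maji et al.'s definitions).
patients = {"p1", "p2", "p3"}
F = {"fever": {"p1", "p2"}, "cough": {"p2", "p3"}}        # symptoms
G = {"flu": {"p2"}, "cold": {"p2", "p3"}}                 # diseases

def soft_and(F, G):
    return {(a, b): F[a] & G[b] for a, b in product(F, G)}

def soft_or(F, G):
    return {(a, b): F[a] | G[b] for a, b in product(F, G)}

H = soft_and(F, G)
print(H[("fever", "flu")])   # -> {'p2'}: patient consistent with both
```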

Keywords: kernel of a soft set, soft set, transpose of a soft set, upper and lower triangular matrix of a soft set

Procedia PDF Downloads 311
18517 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The receiver operating characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results, and the ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and since it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Besides, being jagged while the true ROC curve is smooth, the empirical estimate underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored: using kernel estimates, using log-concave densities, fitting a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimator based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions at different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the different methods across scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
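
A minimal sketch of kernel smoothing for a ROC curve, using plain Gaussian kernels without the paper's boundary correction: the test-result distributions of the two groups are smoothed, and the ROC curve is traced from the smoothed survival functions. Bandwidth choices are illustrative.

```python
import numpy as np
from scipy.stats import norm

def kernel_smoothed_roc(x_healthy, x_diseased, h0=None, h1=None, grid=512):
    """Plain Gaussian-kernel smoothing of the two test-result distributions
    (no boundary correction, unlike the paper's proposed estimator)."""
    def smooth_cdf(sample, h, t):
        # Kernel CDF estimate: average of Gaussian CDFs centred at the data.
        return norm.cdf((t[:, None] - sample[None, :]) / h).mean(axis=1)

    # Silverman-style default bandwidths.
    h0 = h0 or 1.06 * x_healthy.std() * len(x_healthy) ** -0.2
    h1 = h1 or 1.06 * x_diseased.std() * len(x_diseased) ** -0.2
    both = np.concatenate([x_healthy, x_diseased])
    t = np.linspace(both.min() - 3 * h0, both.max() + 3 * h1, grid)
    fpr = 1.0 - smooth_cdf(x_healthy, h0, t)    # 1 - specificity
    tpr = 1.0 - smooth_cdf(x_diseased, h1, t)   # sensitivity
    return fpr, tpr

rng = np.random.default_rng(2)
fpr, tpr = kernel_smoothed_roc(rng.normal(0, 1, 100), rng.normal(1, 1, 100))
auc = np.trapz(tpr[::-1], fpr[::-1])    # crude AUC from the smooth curve
print(round(auc, 3))
```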

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 123
18516 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model whose response variable is count data following a Poisson distribution. A pair of count variables that show high correlation can be modeled by bivariate Poisson regression. The numbers of infant deaths and maternal deaths are count data that can be analyzed in this way. Poisson regression assumes equidispersion, where the mean and variance are equal; actual count data, however, may have a variance greater or less than the mean (overdispersion and underdispersion). Violations of this assumption can be overcome by applying generalized Poisson regression. In addition, the characteristics of each regency can affect the number of cases that occur; this issue can be addressed by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, clean and healthy household behavior, and married women whose first marriage occurred under the age of 18.
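
A sketch of the adaptive bisquare kernel weighting, under the usual definition where each location's bandwidth is the distance to its k-th nearest neighbour so that every local regression uses roughly the same number of effective neighbours; the coordinates and k below are hypothetical.

```python
import numpy as np

def adaptive_bisquare_weights(coords, k=10):
    """Adaptive bisquare kernel:
    w_ij = (1 - (d_ij / b_i)^2)^2 if d_ij < b_i, else 0,
    where b_i is the distance from location i to its k-th nearest neighbour.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    b = np.sort(d, axis=1)[:, k]               # k-th nearest distance
    w = np.where(d < b[:, None], (1 - (d / b[:, None]) ** 2) ** 2, 0.0)
    return w

coords = np.random.default_rng(3).uniform(size=(38, 2))  # e.g. 38 regencies
W = adaptive_bisquare_weights(coords, k=10)
print(W.shape, (W > 0).sum(axis=1).min())   # each row has ~k+1 nonzeros
```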

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 131
18515 The Use of Palm Kernel Shell and Ash for Concrete Production

Authors: J. E. Oti, J. M. Kinuthia, R. Robinson, P. Davies

Abstract:

This work reports the potential of using palm kernel (PK) ash and shell as a partial substitute for Portland cement (PC) and coarse aggregate in the development of mortar and concrete. PK ash and shell are agro-waste materials from palm oil mills, and their disposal is an environmental problem of concern. PK ash has pozzolanic properties that enable it to act as a partial replacement for cement, and it plays an important role in the strength and durability of concrete; its use in concrete would alleviate the increasing challenges of scarcity and the high cost of cement. In order to investigate the PC replacement potential of PK ash, three types of PK ash were produced at varying temperatures (350-750 degrees) and used to replace up to 50% of the PC. The PK shell was used to replace up to 100% of the coarse aggregate in order to study its aggregate replacement potential. The testing programme included material characterisation, the determination of compressive strength and tensile splitting strength, and chemical durability in aggressive sulfate-bearing exposure conditions. The 90-day compressive strength results showed a significant strength gain (up to 26.2 N/mm2). Portland cement and conventional coarse aggregate had a significantly higher influence on the strength gain than the equivalent PK ash and PK shell. The chemical durability results demonstrated that, after a prolonged period of exposure, significant strength losses occurred in all the concretes. This phenomenon is attributed to changes in concrete morphology, the inhibition of reaction species, and the eventual disruption of the aggregate-cement paste matrix.

Keywords: sustainability, concrete, mortar, palm kernel shell, compressive strength, consistency

Procedia PDF Downloads 364
18514 Spatial Point Process Analysis of Dengue Fever in Tainan, Taiwan

Authors: Ya-Mei Chang

Abstract:

This research applies spatio-temporal point process methods to dengue fever data from Tainan. The spatio-temporal intensity function of the dataset is assumed to be separable. Kernel estimation is a widely used approach for estimating intensity functions, and the intensity function is very helpful for studying the relation between the spatio-temporal point process and covariates. Because the covariate effects might be nonlinear, a nonparametric smoothing estimator is used to detect nonlinearity in the covariate effects, and a fitted parametric model then describes the influence of the covariates on dengue fever. The correlation between the data points is detected by the K-function. The results of this research could provide useful information to help the government and stakeholders make decisions.
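
A minimal sketch of a Gaussian kernel intensity estimate for a spatial point pattern, without the edge correction that a production analysis would include; the case coordinates are synthetic.

```python
import numpy as np

def kernel_intensity(points, grid_x, grid_y, bandwidth):
    """Gaussian kernel estimate of a spatial intensity function lambda(u);
    edge correction is omitted for brevity."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    lam = np.zeros_like(gx)
    for x, y in points:
        lam += np.exp(-((gx - x) ** 2 + (gy - y) ** 2)
                      / (2 * bandwidth ** 2))
    return lam / (2 * np.pi * bandwidth ** 2)   # kernel normalization

# Synthetic case locations standing in for dengue case coordinates.
rng = np.random.default_rng(4)
cases = rng.uniform(0, 10, size=(200, 2))
grid = np.linspace(0, 10, 50)
intensity = kernel_intensity(cases, grid, grid, bandwidth=0.8)
print(intensity.sum() * (grid[1] - grid[0]) ** 2)  # ~ number of points
```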

Keywords: dengue fever, spatial point process, kernel estimation, covariate effect

Procedia PDF Downloads 324
18513 A Study to Evaluate Some Physical and Mechanical Properties, Relevant in Estimating Energy Requirements in Grinding the Palm Kernel and Coconut Shells

Authors: Saheed O. Akinwale, Olufemi A. Koya

Abstract:

Based on the need to modify palm kernel shell (PKS) and coconut shell (CNS) for some engineering applications, this study evaluated some physical characteristics and the fracture resistance relevant to estimating the energy requirements for comminution of the nutshells. The shells, obtained from local processing mills, were washed, sun-dried, and sorted to remove kernels, nuts, and other extraneous materials. Experiments were then conducted to determine the thickness, density, moisture content, and hardness of the shells. Fracture resistance was characterised by the average compressive load, stiffness, and toughness at the bio-yield point of specially prepared sections of the shells under quasi-static compression loading. The densities of the dried PKS at 7.12% and the CNS at 6.47% (wb) moisture content were 1291.20 and 1247.40 kg/m3, respectively; the corresponding Brinell hardness numbers were 58.40 ± 1.91 and 56.33 ± 4.33. Shells of similar thickness from PKS and CNS exhibited near-identical physical properties, although CNS is larger in physical dimensions than PKS. The findings further showed that both shell types exhibited higher resistance to compression along the longitudinal axis than along the transverse axis. With compression along the longitudinal axis, the fracture forces were 1.41 ± 0.11 and 3.62 ± 0.09 kN; the bio-stiffnesses were 934.70 ± 67.03 and 1980.74 ± 8.92 kN/m; and the toughnesses were 2.17 ± 0.16 and 6.51 ± 0.15 kN mm for the PKS and CNS, respectively. Since the estimated toughness of CNS is higher than that of PKS, the study indicates that CNS requires higher comminution energy.

Keywords: bio-stiffness, coconut shell, comminution, crushing strength, energy requirement, palm kernel shell, toughness

Procedia PDF Downloads 204
18512 The Inclusion of the Cabbage Waste in Buffalo Ration Made of Sugarcane Waste and Its Effect on Characteristics of the Silage

Authors: Adrizal, Irsan Ryanto, Sri Juwita, Adika Sugara, Tino Bapirco

Abstract:

The objective of the research was to study the influence of including cabbage waste in a buffalo ration made of sugarcane waste on the feed formula and the characteristics of the complete feed silage. The research was carried out in two stages: feed formulation and an experiment on making complete feed silage. Feed formulation was done by linear programming, with feedstuff prices and nutrient contents, together with the ration requirements, as input, and the amount of each feedstuff and the price of the complete feed as output; a sketch of this formulation step is given below. The complete feed silage experiment used a 4 x 4 completely randomized design. The treatments were 4 inclusion levels of cabbage waste, i.e., 0% (T1), 5% (T2), 10% (T3), and 15% (T4), with 4 replications. The feed formula for T1 was cabbage (0%), sugarcane top (17.9%), bagasse (33.3%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (2.7%), palm kernel cake (20.0%), corn meal (9.1%), bone meal (1.5%), and salt (0.5%). The formula for T2 was cabbage (5%), sugarcane top (1.7%), bagasse (45.2%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (3.6%), palm kernel cake (20.0%), corn meal (7.5%), bone meal (1.5%), and salt (0.5%). The formula for T3 was cabbage (10%), sugarcane top (0%), bagasse (45.3%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (3.8%), palm kernel cake (20.0%), corn meal (3.9%), bone meal (1.5%), and salt (0.5%). The formula for T4 was cabbage (15.0%), sugarcane top (0%), bagasse (44.1%), molasses (5.0%), Tithonia sp. (10.0%), rice bran (3.9%), palm kernel cake (20.0%), corn meal (0%), bone meal (1.5%), and salt (0.5%). An increase in the level of inclusion of cabbage waste decreased the cost of the ration: the ration costs (IDR/kg on a DM basis) were 1442, 1367, 1333, and 1300, respectively. The ration formulas did not significantly (P > 0.05) influence the fungal colonies, smell, texture, or color of the complete feed silage, but the pH increased significantly (P < 0.05). It is concluded that the inclusion of cabbage waste can minimize the cost of a buffalo ration without decreasing the silage quality of the complete feed.
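
For illustration, least-cost feed formulation of this kind can be posed as a linear program; the sketch below uses scipy with made-up prices, nutrient values, and constraints, not the paper's data.

```python
import numpy as np
from scipy.optimize import linprog

# Least-cost ration formulation: minimize feed cost subject to nutrient
# requirements. All prices and nutrient values are hypothetical placeholders.
ingredients = ["cabbage", "bagasse", "molasses", "rice bran", "pk cake"]
cost = np.array([500, 300, 1500, 1800, 2500])      # IDR/kg DM (hypothetical)
protein = np.array([18.0, 2.0, 4.0, 12.0, 16.0])   # % CP (hypothetical)
energy = np.array([2.1, 1.6, 2.9, 2.6, 2.8])       # Mcal/kg (hypothetical)

# Constraints: proportions sum to 1; protein >= 8%; energy >= 2.0 Mcal/kg.
res = linprog(
    c=cost,
    A_ub=np.array([-protein, -energy]),    # -Ax <= -b encodes Ax >= b
    b_ub=np.array([-8.0, -2.0]),
    A_eq=np.ones((1, len(cost))),
    b_eq=np.array([1.0]),
    bounds=[(0, 0.15), (0, 0.6), (0, 0.05), (0, 0.3), (0, 0.25)],
    method="highs",
)
print(dict(zip(ingredients, res.x.round(3))), round(res.fun), "IDR/kg DM")
```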

Keywords: buffalo, cabbage, complete feed, silage characteristics, sugarcane waste

Procedia PDF Downloads 222
18511 Fast and Accurate Finite-Difference Method Solving Multicomponent Smoluchowski Coagulation Equation

Authors: Alexander P. Smirnov, Sergey A. Matveev, Dmitry A. Zheltkov, Eugene E. Tyrtyshnikov

Abstract:

We propose a new computational technique for the multidimensional (multicomponent) Smoluchowski coagulation equation. Using low-rank approximations in the Tensor Train format of both the solution and the coagulation kernel, we accelerate the classical finite-difference Runge-Kutta scheme while keeping its level of accuracy. The complexity of the finite-difference scheme is reduced from O(N^(2d)) to O(d^2 N log N), where N is the number of grid nodes and d is the dimensionality of the problem. The efficiency and accuracy of the new method are demonstrated on a concrete problem with a known analytical solution.
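
The complexity gain is easiest to see in the one-component case with a constant kernel, where the coagulation gain term is a discrete self-convolution computable via FFT in O(N log N); the sketch below shows this toy case (the paper's Tensor Train machinery for general kernels and d components is not reproduced here).

```python
import numpy as np
from scipy.signal import fftconvolve

def smoluchowski_step(n, dt):
    """One explicit Euler step of the discrete one-component Smoluchowski
    equation with constant kernel K = 1; n[k] is the concentration of
    clusters of size k+1. The gain term is a discrete self-convolution,
    evaluated here via FFT in O(N log N)."""
    conv = fftconvolve(n, n)                 # conv[m] = sum_{a+b=m} n_a n_b
    gain = np.zeros_like(n)
    gain[1:] = 0.5 * conv[:n.size - 1]       # size k collects pairs i + j = k
    loss = n * n.sum()                       # loss_k = n_k * sum_j n_j
    return n + dt * (gain - loss)

n = np.zeros(256)
n[0] = 1.0                                   # monodisperse initial condition
for _ in range(100):
    n = smoluchowski_step(n, dt=0.01)
print(n[:5])                                 # smallest cluster concentrations
```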

Keywords: tensor train decomposition, multicomponent Smoluchowski equation, Runge-Kutta scheme, convolution

Procedia PDF Downloads 389
18510 TACTICAL: RAM Image Retrieval in Linux Using Protected Mode Architecture’s Paging Technique

Authors: Sedat Aktas, Egemen Ulusoy, Remzi Yildirim

Abstract:

This article explains how to obtain a RAM image from a computer running the Linux operating system and the steps to follow while obtaining it. By taking a RAM image, we mean the process of instantly dumping the physical memory and writing it to a file; the process can be likened to taking a picture of everything in the computer’s memory at that moment. This process is very important for tools that analyze RAM images. Volatility is one example, because a RAM image must be acquired before such tools can analyze it. These tools are used extensively in the forensics world. Digital forensics, in turn, is a set of processes for examining the information on any computer or server on behalf of official authorities. In this article, the protected mode architecture of the Linux operating system is examined, and the procedure for saving an image of kernel driver and system memory to disk is followed. Based on the basic architecture of the operating system, the tables and access methods to be used are examined, and the most appropriate methods and their application are presented. Since the literature contains no article directly related to this study on Linux, this work aims to contribute to the literature on obtaining RAM images. LiME can be mentioned as a similar tool, but there is no published explanation of its memory dumping method. Considering how frequently these tools are used, the study's contribution to the field of digital forensics has been its main motivation, given the intense interest in RAM images in forensics.
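
As a rough illustration of one preparatory step such a tool needs (an assumption about the workflow, not the paper's code), the physical address ranges backed by actual RAM can be read from /proc/iomem; a dump driver must confine itself to these "System RAM" ranges, since reading device-mapped physical addresses can hang the machine.

```python
import re

def system_ram_ranges(iomem_path="/proc/iomem"):
    """Parse physical address ranges labelled 'System RAM' from /proc/iomem.
    A dump tool (e.g. a kernel module such as LiME) restricts itself to
    these ranges. Root privileges are needed to see real addresses on
    modern kernels (unprivileged reads show zeroed addresses)."""
    ranges = []
    with open(iomem_path) as f:
        for line in f:
            m = re.match(r"\s*([0-9a-f]+)-([0-9a-f]+) : System RAM", line)
            if m:
                ranges.append((int(m.group(1), 16), int(m.group(2), 16)))
    return ranges

for start, end in system_ram_ranges():
    print(f"{start:#x}-{end:#x} ({(end - start + 1) >> 20} MiB)")
```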

Keywords: linux, paging, addressing, ram-image, memory dumping, kernel modules, forensic

Procedia PDF Downloads 78
18509 Bioethanol Production From the Co-Mixture of Jatropha Curcas L. Kernel Cake and Rice Straw

Authors: Felix U. Asoiro, Daniel I. Eleazar, Peter O. Offor

Abstract:

As a result of increasing energy demands, research on bioethanol has increased in recent years throughout the world, in a bid to partially or totally replace conventional energy supplies with renewable ones. The first- and third-generation feedstocks used for biofuel production have fundamental drawbacks. Waste rice straw and cake from a second-generation feedstock like Jatropha curcas L. kernel (JC) are seen as non-food feedstocks and promising candidates for the industrial production of bioethanol. In this study, JC and rice husk (RH) wastes were characterized for proximate composition. Bioethanol was produced from the residual polysaccharides present in rice husk and Jatropha seed cake by sequential hydrolytic and fermentative processes at varying mixing proportions (50 g JC/50 g RH, 100 g JC/10 g RH, 100 g JC/20 g RH, 100 g JC/50 g RH, 100 g JC/100 g RH, 100 g JC/200 g RH, and 200 g JC/100 g RH) and particle sizes (0.25, 0.5, and 1.00 mm). Mixing proportions and particle size significantly affected both the bioethanol yield and some bioethanol properties. Bioethanol yield (%) increased with an increase in particle size. The highest bioethanol yield (8.67%) was produced at a mixing proportion of 100 g JC/50 g RH and a 0.25 mm particle size. The bioethanol had lowest specific gravity and density values of 1.25 and 0.92 g cm-3 and highest values of 1.57 and 0.97 g cm-3, respectively. The highest viscosity (4.64 cSt) was obtained with 200 g JC/100 g RH at a 1.00 mm particle size. The maximum flash point and cloud point values were 139.9 °C and 23.7 °C (100 g JC/200 g RH) at 1 mm and 0.5 mm particle sizes, respectively. The maximum pour point recorded was 3.85 °C (100 g JC/50 g RH) at a 1 mm particle size. The paper concludes that bioethanol can be recovered from JC and RH wastes, and that the JC/RH blending proportion and particle size are important factors in bioethanol production.

Keywords: bioethanol, hydrolysis, Jatropha curcas L. kernel, rice husk, fermentation, proximate composition

Procedia PDF Downloads 67
18508 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The support vector machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions centred on a subset of the training data, referred to as support vectors. Despite its popularity amongst practitioners, the SVM has some limitations, the most significant being that it generates point predictions rather than predictive distributions. Stemming from this issue, a probabilistic model, the Probabilistic Classification Vector Machine (PCVM), has been proposed, which respects the original functional form of the SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework which extends the PCVM to the multi-class setting. Additionally, the original PCVM framework relies on type II maximum likelihood to estimate both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
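
A toy sketch of the MCMC ingredient: random-walk Metropolis over a kernel bandwidth's log-posterior, here under a stand-in Gaussian-process-style marginal likelihood rather than the PCVM model itself; all names and priors are illustrative.

```python
import numpy as np

def log_posterior(log_h, X, y):
    """Unnormalized log-posterior of a kernel bandwidth under a toy
    marginal likelihood y ~ N(0, K + jitter); stands in for the PCVM
    hyperparameter posterior."""
    h = np.exp(log_h)
    K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * h ** 2))
    C = K + 0.1 * np.eye(len(X))                 # jitter / noise term
    _, logdet = np.linalg.slogdet(C)
    fit = y @ np.linalg.solve(C, y)
    return -0.5 * (logdet + fit) - 0.5 * log_h ** 2   # N(0,1) prior on log h

def metropolis(X, y, steps=2000, scale=0.3):
    # Random-walk Metropolis over log-bandwidth: instead of a single
    # type-II ML point estimate, we keep a posterior sample.
    rng = np.random.default_rng(5)
    cur, cur_lp = 0.0, log_posterior(0.0, X, y)
    samples = []
    for _ in range(steps):
        prop = cur + scale * rng.standard_normal()
        prop_lp = log_posterior(prop, X, y)
        if np.log(rng.uniform()) < prop_lp - cur_lp:
            cur, cur_lp = prop, prop_lp
        samples.append(cur)
    return np.exp(samples)

X = np.linspace(-2, 2, 40)
y = np.sin(2 * X) + 0.1 * np.random.default_rng(6).standard_normal(40)
h_samples = metropolis(X, y)
print(h_samples[1000:].mean())     # posterior mean bandwidth after burn-in
```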

Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines

Procedia PDF Downloads 201
18507 Processing Methods for Increasing the Yield, Nutritional Value and Stability of Coconut Milk

Authors: Archana G. Lamdande, Shyam R. Garud, K. S. M. S. Raghavarao

Abstract:

Coconut has two edible parts: the white kernel (solid endosperm) and coconut water (liquid endosperm). The white kernel is generally used in fresh or dried form for culinary purposes. Coconut testa is the brown skin covering the coconut kernel. It is removed by paring the wet coconut and is obtained as a by-product in coconut processing industries during the production of products such as desiccated coconut, coconut milk, whole coconut milk powder, and virgin coconut oil. At present, it is used as an animal feed component after drying and recovering the residual oil (by expelling). Experiments were carried out on expelling coconut milk from shredded coconut with and without testa removal, in order to explore the possibility of increasing the milk yield and adding value in terms of increased polyphenol content. The color characteristics of coconut milk obtained from grating without removal of testa were L* 82.79, a* 0.0125, b* 6.245, while those from grating with removal of testa were L* 83.24, a* -0.7925, b* 3.1. A significant increase was observed in the total phenol content of coconut milk obtained from grating with testa (833.8 µl/ml) compared to that without testa (521.3 µl/ml). However, no significant difference was observed in the protein content of coconut milk obtained from grating with and without testa (4.9 and 5.0% w/w, respectively). Coconut milk obtained from grating without removal of testa showed a higher milk yield (62% w/w) than that obtained from grating with removal of testa (60% w/w). The fat content of the coconut milk was 32% (w/w), which makes it unstable. Therefore, several experiments were carried out to examine its stability with the fat content adjusted to different levels (32, 28, 24, and 20% w/w); the coconut milk was found to be more stable at a fat content of 24% (w/w). Homogenization, ultrasonication, and their combinations were used to explore the possibility of further increasing the stability of the coconut milk. A microscopic study was carried out to analyze the size of the fat globules and the uniformity of their distribution.

Keywords: coconut milk, homogenization, stability, testa, ultrasonication

Procedia PDF Downloads 277
18506 Reducing the Cooking Time of Bambara Groundnut (BGN)

Authors: Auswell Amfo-Antiri, Esther Eshun, Theresa A. Amu

Abstract:

Cooking Bambara groundnut (Bambara beans) is time- and energy-consuming. Over time, various substances have been used to help reduce cooking time and save energy. This experimental study was carried out to find ways of reducing the cooking time of Bambara groundnut using selected organic substances. Twenty grams (20 g) each of fresh pawpaw leaves, guava leaves, ginger, onion, and palm kernel were cooked with five samples of 200 g of the creamy variety of raw Bambara groundnut, and a control was cooked without any organic substance added. All six samples were cooked with equal quantities of water (4 L) in the same cooking pot at gas mark 5, the highest setting for the largest burner. The control sample required 192 minutes to cook thoroughly. The ginger-treated sample (AET02) had the shortest cooking time of 145 minutes, followed by the onion-treated sample (AET05) with a cooking time of 157 minutes. The samples cooked with palm kernel (AET06) and pawpaw (AET04) used 172 minutes and 174 minutes, respectively, while sample AET03, cooked with guava, used 185 minutes. The reductions in cooking time for the samples treated with ginger (AET02) and onion (AET05) were 47 minutes and 35 minutes, respectively, compared with the control. The comparison between control and pawpaw produced [p=0.163>0.05]; control and ginger yielded [p=0.006<0.05]; control and kernel resulted in [p=0.128>0.05]; control and guava resulted in [p=0.560>0.05]. The study concluded that ginger and onion appreciably reduced the cooking time of Bambara groundnut and recommended that they could be used for this purpose.

Keywords: cooking time, organic substances, ginger, onions, pawpaw leaves, guava leaves, bambara groundnut

Procedia PDF Downloads 51
18505 Occurrence and Levels of Mycotoxins in On-Farm Stored Sesame in Major-Growing Districts of Ethiopia

Authors: S. Alemayehu, F. A. Abera, K. M. Ayimut, R. Mahroof, J. Harvey, B. Subramanyam

Abstract:

The occurrence of mycotoxins in sesame seeds poses a significant threat to food safety and the economy in Ethiopia. This study aimed to determine the levels and occurrence of mycotoxins in on-farm stored sesame seeds in major growing districts of Ethiopia. A total of 470 sesame seed samples were collected from randomly selected farmers' storage structures in five major growing districts using purposive sampling techniques. An enzyme-linked immunosorbent assay (ELISA) was used to analyze the collected samples for the presence of four mycotoxins: total aflatoxins (AFT), ochratoxin A (OTA), total fumonisins (FUM), and deoxynivalenol (DON). The study found that all samples contained varying levels of mycotoxins, with AFT and DON being the most prevalent. AFT concentrations in the detected samples ranged from 2.5 to 27.8 parts per billion (ppb), with a mean concentration of 13.8 ppb. OTA levels ranged from 5.0 ppb to 9.7 ppb, with a mean level of 7.1 ppb. Total fumonisin concentrations ranged from 300 to 1300 ppb, with a mean of 800 ppb. DON concentrations ranged from 560 to 700 ppb in the analyzed samples. The majority (96.8%) of the samples had mean AFT, FUM, and DON levels below the US Food and Drug Administration maximum limits. The co-occurrence rates for the AFT-OTA, DON-OTA, AFT-FUM, FUM-DON, and FUM-OTA mycotoxin pairs were 44.0, 38.3, 33.8, 30.2, 29.8 and 26.0%, respectively. On average, 37.2% of the sesame samples had fungal infection, and seed germination rates ranged from 66.8% to 91.1%. The Limmu district had higher levels of total aflatoxins and kernel infection, and lower germination rates, than the other districts. The Wollega variety of sesame had higher kernel infection and total aflatoxin concentrations, and lower germination rates, than other varieties. Grain age had a statistically significant (p<0.05) effect on both kernel infection and germination. The storage methods used for sesame in the major growing districts of Ethiopia favor mycotoxin-producing fungi. As the levels of mycotoxins in sesame are of public health significance, stakeholders should work together to identify secure and suitable storage technologies that maintain the quantity and quality of sesame at the smallholder farmer level. This study suggests the need for suitable storage technologies to maintain the quality of sesame and reduce the risk of mycotoxin contamination.

Keywords: districts, seed germination, kernel infection, moisture content, relative humidity, temperature

Procedia PDF Downloads 79
18504 Low-Cost Embedded Biometric System Based on Fingervein Modality

Authors: Randa Boukhris, Alima Damak, Dorra Sellami

Abstract:

Fingervein biometric authentication is one of the most popular and accurate biometric technologies; however, a low-cost embedded solution remains an open problem. In this paper, a real-time implementation of the fingervein recognition process embedded in a Raspberry Pi is proposed. The use of the Raspberry Pi reduces the overall system cost and size while allowing an easy user interface. The target platform guided the choice of simple, parallelizable processing algorithms. In the proposed system, we use four structural directional kernel elements to filter the fingervein images. Top-Hat and Bottom-Hat morphological filters are then used to enhance the visibility and appearance of the venous images. For the feature extraction step, a simple Local Directional Code (LDC) descriptor is applied. The proposed system achieves an Equal Error Rate (EER) of 0.02 and an Identification Rate (IR) of 98%. Furthermore, experimental results show good real-time performance.
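
A sketch of the enhancement step, assuming OpenCV's standard morphological Top-Hat and Black-Hat (Bottom-Hat) operators; the structuring element size and the add/subtract combination are assumptions, as the abstract does not specify them.

```python
import cv2
import numpy as np

# Top-Hat keeps bright detail smaller than the structuring element;
# Bottom-Hat (OpenCV's Black-Hat) keeps dark detail such as vein lines.
# Adding the first and subtracting the second boosts vein visibility.
img = np.random.randint(0, 256, (120, 160), dtype=np.uint8)  # stand-in ROI

se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)
bottomhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, se)

enhanced = cv2.add(img, tophat)          # saturating uint8 arithmetic
enhanced = cv2.subtract(enhanced, bottomhat)
```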

Keywords: biometric, Bottom-Hat, fingervein, LDC, Raspberry-Pi, ROI, Top-Hat

Procedia PDF Downloads 180
18503 Nonparametric Copula Approximations

Authors: Serge Provost, Yishan Zang

Abstract:

Copulas are currently utilized in finance, reliability theory, machine learning, signal processing, geodesy, hydrology, and biostatistics, among several other fields of scientific investigation. It follows from Sklar's theorem that the joint distribution function of a multidimensional random vector can be expressed in terms of its associated copula and marginals. Since marginal distributions can easily be determined by a variety of techniques, we address the problem of obtaining the distribution of the copula. This is done using several approaches: we obtain bivariate least-squares approximations of the empirical copulas, modify the kernel density estimation technique and propose a criterion for selecting appropriate bandwidths, differentiate linearized empirical copulas, obtain Bernstein polynomial approximations of suitable degrees, and apply a corollary to Sklar's result. Illustrative examples involving actual observations are presented. The proposed methodologies are also applied to a sample generated from a known copula distribution in order to validate their effectiveness.
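
A minimal sketch of one of the listed approaches, the Bernstein polynomial approximation of the empirical copula; the degree m and the sample are illustrative.

```python
import numpy as np
from scipy.stats import binom, rankdata

def empirical_copula(u, v, x, y):
    """C_n(u, v) from a bivariate sample (x, y) via rank pseudo-observations."""
    pu = rankdata(x) / (len(x) + 1)
    pv = rankdata(y) / (len(y) + 1)
    return np.mean((pu <= u) & (pv <= v))

def bernstein_copula(u, v, x, y, m=10):
    # Degree-m Bernstein polynomial smoothing of the empirical copula:
    # C_B(u,v) = sum_{j,k} C_n(j/m, k/m) * b_{j,m}(u) * b_{k,m}(v),
    # where b_{j,m}(u) is the Binomial(m, u) pmf evaluated at j.
    j = np.arange(m + 1)
    Cn = np.array([[empirical_copula(a / m, b / m, x, y) for b in j]
                   for a in j])
    bu = binom.pmf(j, m, u)
    bv = binom.pmf(j, m, v)
    return bu @ Cn @ bv

rng = np.random.default_rng(7)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], 400)
# Exceeds 0.25 (the independence value) under positive dependence.
print(bernstein_copula(0.5, 0.5, z[:, 0], z[:, 1]))
```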

Keywords: copulas, Bernstein polynomial approximation, least-squares polynomial approximation, kernel density estimation, density approximation

Procedia PDF Downloads 43
18502 The Journey of a Malicious HTTP Request

Authors: M. Mansouri, P. Jaklitsch, E. Teiniker

Abstract:

SQL injection against web applications is a very popular kind of attack. Mechanisms such as intrusion detection systems exist to detect this attack; however, these strategies often rely on techniques implemented at the higher layers of the application and do not consider the low level of system calls. The problem with considering only the high-level perspective is that an attacker can circumvent the detection tools using techniques such as URL encoding. One technique currently used for detecting low-level attacks on privileged processes is the tracing of system calls. System calls act as a single gate to the operating system (OS) kernel; they allow catching the critical data at an appropriate level of detail. Our basic assumption is that any type of application, be it a system service, utility program, or web application, "speaks" the language of system calls when having a conversation with the OS kernel. At this level, we can see the actual attack while it is happening. We conduct an experiment to demonstrate the suitability of system call analysis for detecting SQL injection, and we are able to detect the attack. We therefore conclude that system calls are not only powerful for detecting low-level attacks but also enable us to detect high-level attacks such as SQL injection.
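
As a crude illustration of the idea (not the authors' detection system), one can trace a process's read/write system calls with strace and scan the recorded buffers for injection signatures; real payloads may be split across calls, and the signature list here is illustrative only.

```python
import re
import subprocess
import sys

SQL_PATTERN = re.compile(r"(UNION\s+SELECT|OR\s+1=1|DROP\s+TABLE)", re.I)

def trace_and_scan(cmd):
    """Run a command under strace and scan read/write payloads for SQL
    injection signatures. -f follows forks; -s raises the string-capture
    limit so payloads are not truncated."""
    trace = subprocess.run(
        ["strace", "-f", "-e", "trace=read,write", "-s", "4096", *cmd],
        capture_output=True, text=True,
    ).stderr                       # strace logs to stderr by default
    return [ln for ln in trace.splitlines() if SQL_PATTERN.search(ln)]

if __name__ == "__main__":
    for line in trace_and_scan(sys.argv[1:]):
        print("suspicious:", line)
```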

Keywords: Linux system calls, web attack detection, interception, SQL

Procedia PDF Downloads 324
18501 Image Segmentation Using Active Contours Based on Anisotropic Diffusion

Authors: Shafiullah Soomro

Abstract:

Active contours are a class of image segmentation techniques whose goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour model based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image contains noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitializing the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found the proposed method to perform better both qualitatively and quantitatively, yielding results with higher accuracy compared to other state-of-the-art methods.
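
A minimal sketch of the Perona-Malik diffusion step used for feature enhancement; the conduction parameter kappa, the step size, and the iteration count are illustrative.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=15.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths homogeneous regions while
    preserving edges, because the conduction coefficient
    c = exp(-(|grad I| / kappa)^2) shuts diffusion off across strong
    gradients. lam <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, 1, axis=1) - u
        dw = np.roll(u, -1, axis=1) - u
        # Edge-stopping conduction coefficients.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.random.default_rng(8).normal(0, 10, (64, 64))
noisy[:, 32:] += 100.0                 # a step edge to preserve
smoothed = perona_malik(noisy)
```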

Keywords: active contours, anisotropic diffusion, level-set, partial differential equations

Procedia PDF Downloads 142
18500 A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory

Authors: Siavash Eftekharifar, Tohid Yousefi Rezaii, Mahdi Shamsi

Abstract:

The purpose of this paper is to exploit the compressed sensing (CS) method in order to model and compress electrocardiogram (ECG) signals at a high compression ratio. To obtain a sparse representation of the ECG signals, a suitable basis matrix with Gaussian kernels, which are shown to fit ECG signals well, is first constructed. Then the sparse model is extracted by applying an optimization technique. Finally, CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also performed to demonstrate the reliability of the algorithm; at this stage, a greedy optimization technique is used to reconstruct the ECG signal, and the mean square error (MSE) is calculated to evaluate the precision of the proposed compression method.
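
A rough sketch of the pipeline under stated assumptions (a Gaussian dictionary, a random Gaussian sensing matrix, and orthogonal matching pursuit as the greedy recovery step); all sizes and the synthetic signal are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# (1) Gaussian-kernel dictionary in which ECG-like signals are sparse,
# (2) random compressive measurements y = Phi x,
# (3) greedy recovery of the sparse code from y.
n, n_atoms, m = 256, 128, 64          # signal length, atoms, measurements
t = np.linspace(0, 1, n)
centers = np.linspace(0, 1, n_atoms)
D = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * 0.01 ** 2))
D /= np.linalg.norm(D, axis=0)        # unit-norm Gaussian atoms

rng = np.random.default_rng(9)
x = D[:, [20, 60, 100]] @ np.array([1.0, -0.5, 0.8])   # 3-sparse "ECG"

Phi = rng.standard_normal((m, n)) / np.sqrt(m)         # sensing matrix
y = Phi @ x                                            # compressed samples

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3)
omp.fit(Phi @ D, y)                   # recover the sparse code from y
x_hat = D @ omp.coef_
print("MSE:", np.mean((x - x_hat) ** 2))
```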

Keywords: compressed sensing, ECG compression, Gaussian kernel, sparse representation

Procedia PDF Downloads 432