Search results for: Linux Kernel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 285

195 General Purpose Graphic Processing Units Based Real Time Video Tracking System

Authors: Mallikarjuna Rao Gundavarapu, Ch. Mallikarjuna Rao, K. Anuradha Bai

Abstract:

Real-time video tracking is a challenging task for computing professionals. The performance of video tracking techniques is strongly affected by the background detection and elimination process. Local regions of the image frame contain vital information about background and foreground. However, pixel-level processing of local regions consumes considerable computational time and memory with traditional approaches. In our approach, we exploit the concurrent computational ability of General Purpose Graphics Processing Units (GPGPU) to address this problem. A Gaussian Mixture Model (GMM) with adaptive weighted kernels is used for detecting the background. The kernel weights are influenced by local regions and are updated by inter-frame variations of the corresponding regions. The proposed system has been tested with GPU devices such as the GeForce GTX 280 and the Quadro K2000. The results are encouraging, with a maximum speedup of 10X over the sequential approach.
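
The adaptive weighted-kernel GMM itself is not distributed as code, so the following is only a minimal sketch of the per-pixel GMM background-subtraction workload that the paper offloads to the GPU, using OpenCV's standard MOG2 subtractor; the input filename is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
# Per-pixel Gaussian mixture background model (standard MOG2, not the
# paper's adaptive weighted-kernel variant)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)  # per-pixel GMM update + classification
    # Clean the mask with a structuring element, then label foreground blobs
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN,
                               cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    n_blobs, labels = cv2.connectedComponents(fg_mask)
cap.release()
```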

Keywords: connected components, embrace threads, local weighted kernel, structuring elements

Procedia PDF Downloads 442
194 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral imaging (HSI) is an attractive modality for inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection in apples, quality analysis and grading of citrus fruits, bruise detection in strawberries, visualization of sugar distribution in melons, measuring the ripening of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can concurrently collect large amounts of spatial and spectral data on the objects being observed. The technique yields detection capabilities that cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detecting fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving two-pattern problems. The conventional FKT has been improved with kernel machines to increase its nonlinear discrimination ability and to capture higher-order statistics of the data. The proposed approach segments the fat content of the ground meat by regarding the fat as the target class, to be separated from the remaining classes (treated as clutter). We applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique achieves high detection performance for the fat ratio in ground meat.
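
For illustration, here is a minimal sketch of the linear Fukunaga-Koontz transform underlying the KFKT (the kernelized version applies the same construction to kernel feature maps rather than raw spectra); the input matrices, whose rows would be pixel spectra from the target (fat) and clutter classes, are assumptions.

```python
import numpy as np

def fkt(X_target, X_clutter):
    # Class correlation matrices, trace-normalized
    S1 = X_target.T @ X_target
    S2 = X_clutter.T @ X_clutter
    S1 /= np.trace(S1)
    S2 /= np.trace(S2)
    # Whiten the sum S1 + S2
    d, U = np.linalg.eigh(S1 + S2)
    P = U @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    # In the whitened space S1 and S2 share eigenvectors and their
    # eigenvalues sum to 1: the dominant directions of one class are the
    # weakest of the other -- the key two-pattern property.
    lam, V = np.linalg.eigh(P.T @ S1 @ P)
    return P @ V, lam  # columns ordered by increasing target eigenvalue

def target_score(x, basis, lam, k=5):
    """Energy of spectrum x in the k strongest target directions."""
    top = basis[:, np.argsort(lam)[-k:]]
    return float(np.sum((x @ top) ** 2))
```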

Keywords: food (ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods

Procedia PDF Downloads 433
193 Median-Based Nonparametric Estimation of Returns in Mean-Downside Risk Portfolio Frontier

Authors: H. Ben Salah, A. Gannoun, C. de Peretti, A. Trabelsi

Abstract:

The Downside Risk (DSR) model for portfolio optimisation overcomes the drawbacks of the classical mean-variance model concerning the asymmetry of returns and investors' perception of risk. The optimization involves a positive definite matrix that is endogenous with respect to the portfolio weights, which makes the problem far more difficult to handle. For this purpose, Athayde (2001) developed a recursive minimization procedure that ensures convergence to the solution. However, when only a finite number of observations is available, the resulting portfolio frontier is not very smooth. To overcome this, Athayde (2003) proposed a mean kernel estimation of the returns so as to create a smoother portfolio frontier, providing an effect similar to having continuous observations. In this paper, taking advantage of the robustness of the median, we replace the mean estimator in Athayde's model by a nonparametric median estimator of the returns. We then give a new version of the former algorithm (of Athayde (2001, 2003)). Finally, we analyse the properties of this improved portfolio frontier and apply the new method to real examples.
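
As a minimal sketch of the idea, assuming a Gaussian kernel and a one-dimensional conditioning variable, a kernel-weighted median can replace the kernel-weighted mean of the original estimator: the local median is the value minimizing the kernel-weighted sum of absolute deviations, computed here through the weighted-quantile definition.

```python
import numpy as np

def kernel_weighted_median(x0, x, y, h):
    """Local median of y at x0: the value m minimizing
    sum_i K((x0 - x_i)/h) * |y_i - m|, i.e. the weighted median."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)  # Gaussian kernel weights
    order = np.argsort(y)
    y_sorted, w_sorted = y[order], w[order]
    cdf = np.cumsum(w_sorted) / np.sum(w_sorted)
    return y_sorted[np.searchsorted(cdf, 0.5)]
```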

Keywords: downside risk, kernel method, median, nonparametric estimation, semivariance

Procedia PDF Downloads 493
192 Physically Informed Kernels for Wave Loading Prediction

Authors: Daniel James Pitchforth, Timothy James Rogers, Ulf Tyge Tygesen, Elizabeth Jane Cross

Abstract:

Wave loading is a primary cause of fatigue within offshore structures, and its quantification presents a challenging and important subtask within the SHM framework. The physics of such environments is difficult to represent accurately, however, which has driven the development of data-driven techniques in recent years. Within many industrial applications, empirical laws remain the preferred method of wave loading prediction due to their low computational cost and ease of implementation. This paper aims to develop an approach that combines data-driven Gaussian process models with physical empirical solutions for wave loading, including Morison's Equation. The aim is to incorporate the physics directly into the covariance function (kernel) of the Gaussian process, enforcing derived behaviours whilst still allowing enough flexibility to account for phenomena, such as vortex shedding, that may not be represented within the empirical laws. The combined approach has a number of advantages, including improved performance over either component used independently, and interpretable hyperparameters.
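
One hedged way to realize such a combination, sketched below, is to express the drag and inertia regressors of Morison's equation (F = ½ρC_dDu|u| + ρC_mAu̇) as a linear kernel and add an RBF kernel for unmodelled effects such as vortex shedding; this illustrates a physics-derived covariance and is not the authors' exact kernel.

```python
import numpy as np

def morison_features(u, du):
    # Drag (u|u|) and inertia (du/dt) regressors from Morison's equation
    return np.column_stack([u * np.abs(u), du])

def kernel(X1, X2, s_phys=1.0, s_rbf=1.0, length=1.0):
    """X columns are assumed to be [velocity u, acceleration du/dt].
    Physics part: Bayesian linear regression over the Morison terms,
    expressed as a covariance. Flexible part: RBF on (u, du)."""
    P1 = morison_features(X1[:, 0], X1[:, 1])
    P2 = morison_features(X2[:, 0], X2[:, 1])
    k_phys = s_phys**2 * P1 @ P2.T
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return k_phys + s_rbf**2 * np.exp(-0.5 * d2 / length**2)

def gp_predict(Xtr, ytr, Xte, noise=1e-2):
    # Standard GP posterior mean for the force given training loads ytr
    K = kernel(Xtr, Xtr) + noise * np.eye(len(Xtr))
    return kernel(Xte, Xtr) @ np.linalg.solve(K, ytr)
```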

Keywords: offshore structures, Gaussian processes, physics-informed machine learning, kernel design

Procedia PDF Downloads 195
191 Home Range and Spatial Interaction Modelling of Black Bears

Authors: Fekadu L. Bayisa, Elvan Ceyhan, Todd D. Steury

Abstract:

Interaction between individuals within the same species is an important component of population dynamics. An interaction can be either static (based on spatial overlap) or dynamic (based on movement interactions). Using GPS collar data, we can quantify both static and dynamic interactions between black bears. The goal of this work is to determine the level of black bear interaction using the 95% and 50% home ranges, as well as to model black bear spatial interactions, which could be attraction, avoidance/repulsion, or no interaction at all, to gain new insights and improve our understanding of ecological processes. Recent methodological developments in home range estimation, inhomogeneous multitype/cross-type summary statistics, and envelope testing methods are explored to study the nature of black bear interactions. Our findings, in general, indicate that black bears of one type in our data set tend to cluster around those of another type.
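
As a minimal sketch of the home-range step, the utilization distribution can be estimated with a plain kernel density estimator (the autocorrelated KDE named in the keywords additionally accounts for dependence between GPS fixes, which is not reproduced here), with the 95% or 50% home range taken as the smallest region holding that share of the probability mass.

```python
import numpy as np
from scipy.stats import gaussian_kde

def home_range(locations, grid_x, grid_y, level=0.95):
    """locations: (n, 2) GPS fixes. Returns the gridded density and a
    boolean mask of the smallest region containing `level` of the mass."""
    kde = gaussian_kde(locations.T)
    XX, YY = np.meshgrid(grid_x, grid_y)
    dens = kde(np.vstack([XX.ravel(), YY.ravel()])).reshape(XX.shape)
    p = dens / dens.sum()
    # Highest-density cells first; cut where cumulative mass reaches level
    flat = np.sort(p.ravel())[::-1]
    cutoff = flat[np.searchsorted(np.cumsum(flat), level)]
    return dens, p >= cutoff
```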

Keywords: autocorrelated kernel density estimator, cross-type summary function, inhomogeneous multitype Poisson process, kernel density estimator, minimum convex polygon, pointwise and global envelope tests

Procedia PDF Downloads 82
190 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, which is a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and since it is a step function, there can be different false positive rates for a single true positive rate value and vice versa. Moreover, since the true ROC curve is assumed to be smooth, the jagged empirical estimate underestimates it. Several smoothing methods have therefore been explored: using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimation based on a boundary-corrected kernel function, and we compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study to compare the performance of the different methods for different scenarios with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
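
A minimal sketch of the kernel-smoothing idea follows: the empirical CDFs of the two groups are replaced by Gaussian-kernel-smoothed CDFs with Silverman bandwidths, tracing a smooth ROC curve; the boundary correction proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def smooth_roc(healthy, diseased, n_grid=512):
    """Kernel-smoothed ROC via smoothed CDFs F(t) = mean Phi((t - x_i)/h)."""
    h0 = 1.06 * healthy.std() * len(healthy) ** -0.2   # Silverman bandwidths
    h1 = 1.06 * diseased.std() * len(diseased) ** -0.2
    lo = min(healthy.min(), diseased.min()) - 3 * max(h0, h1)
    hi = max(healthy.max(), diseased.max()) + 3 * max(h0, h1)
    t = np.linspace(lo, hi, n_grid)
    F0 = norm.cdf((t[:, None] - healthy) / h0).mean(axis=1)   # non-diseased
    F1 = norm.cdf((t[:, None] - diseased) / h1).mean(axis=1)  # diseased
    return 1.0 - F0, 1.0 - F1   # (FPR(t), TPR(t)) as the threshold t varies

rng = np.random.default_rng(0)
fpr, tpr = smooth_roc(rng.normal(0, 1, 50), rng.normal(1, 1, 50))
```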

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 152
189 Stability and Rheology of Sodium Diclofenac-Loaded and Unloaded Palm Kernel Oil Esters Nanoemulsion Systems

Authors: Malahat Rezaee, Mahiran Basri, Raja Noor Zaliha Raja Abdul Rahman, Abu Bakar Salleh

Abstract:

Sodium diclofenac is one of the most commonly used nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe conditions of inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that has been progressively considered in pharmaceutical science for the transdermal delivery of drugs. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid, usually the oil phase, in another immiscible liquid, the water phase, stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, which are suitable for application in micro- and nanoemulsion systems as a carrier for actives, with excellent wetting behaviour and without an oily feeling. This research studied the effect of the oil-to-surfactant (O/S) ratio on the stability and rheological behaviour of sodium diclofenac loaded and unloaded palm kernel oil esters nanoemulsion systems. The effect of O/S ratios of 0.25, 0.50, 0.75, 1.00 and 1.25 on the stability of the drug-loaded and unloaded nanoemulsion formulations was evaluated by centrifugation, freeze-thaw cycle and storage stability tests. Lecithin and Cremophor EL were used as surfactants. The stability of the prepared nanoemulsion formulations was assessed based on the change in zeta potential and droplet size as a function of time. Instability mechanisms for the nanoemulsion system, including coalescence and Ostwald ripening, are discussed. Compared with the unloaded formulations, the drug-loaded formulations exhibited smaller particle sizes and higher stability. In addition, the O/S ratio of 0.5 was found to be the best ratio of oil and surfactant for producing a nanoemulsion with the highest stability. The effect of the O/S ratio on the rheological properties of drug-loaded and unloaded nanoemulsion systems was studied by plotting the flow curves of shear stress (τ) and viscosity (η) as functions of shear rate (γ). The data were fitted to the Power Law model. The results showed that all nanoemulsion formulations exhibited non-Newtonian flow behaviour, displaying shear-thinning behaviour. Viscosity and yield stress were also evaluated. The nanoemulsion formulation with the O/S ratio of 0.5 showed higher viscosity and K values. In addition, the sodium diclofenac loaded formulations had higher viscosity and higher yield stress than the unloaded formulations.
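
For the rheological step, a minimal sketch of fitting the Power Law (Ostwald-de Waele) model τ = Kγ̇ⁿ is shown below; the shear-rate and shear-stress values are placeholders, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(gamma, K, n):
    # Ostwald-de Waele model; n < 1 indicates shear thinning
    return K * gamma ** n

gamma = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])  # shear rate, 1/s
tau = np.array([2.1, 6.8, 10.9, 34.2, 54.6, 170.0])     # shear stress, Pa
(K, n), _ = curve_fit(power_law, gamma, tau, p0=(1.0, 1.0))
print(f"K = {K:.3f} Pa·s^n, n = {n:.3f}  (n < 1 -> shear thinning)")
```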

Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, rheology, stability

Procedia PDF Downloads 424
188 Support Vector Machine Based Retinal Therapeutic for Glaucoma Using Machine Learning Algorithm

Authors: P. S. Jagadeesh Kumar, Mingmin Pan, Yang Yung, Tracy Lin Huan

Abstract:

Glaucoma is a group of visual disorders characterized by progressive optic nerve neuropathy, leading to a gradual narrowing of the visual field and, ultimately, loss of sight. In this paper, a novel support vector machine based retinal therapeutic for glaucoma using a machine learning algorithm is proposed. The algorithm is built on a correlation clustering mode and performs its computations in a multi-dimensional feature space. Support vector clustering is comparable to the scale-space approach, which investigates the cluster organization by means of a kernel density estimation of the probability distribution, where cluster centres are identified by the local maxima of the density. The proposed method achieved a 91% detection rate on a data set consisting of 500 realistic images of healthy and glaucomatous retinas; the computational benefit of the cluster overlapping system based on the machine learning algorithm therefore yields strong performance for the glaucoma therapeutic.

Keywords: machine learning algorithm, correlation clustering mode, cluster overlapping system, glaucoma, kernel density estimation, retinal therapeutic

Procedia PDF Downloads 255
187 Characterising Stable Model by Extended Labelled Dependency Graph

Authors: Asraful Islam

Abstract:

The extended dependency graph (EDG) is a state-of-the-art isomorphic graph representation of normal logic programs (NLPs) that can characterize the consistency of NLPs by graph analysis. To construct the vertices and arcs of an EDG, additional renaming atoms and rules beyond those provided by the given program are used, resulting in higher space complexity compared to the corresponding traditional dependency graph (TDG). In this article, we propose an extended labelled dependency graph (ELDG) to represent an NLP that has the same number of nodes and arcs as the TDG, and we prove that it is isomorphic to the domain program. The numbers of nodes and arcs used in the underlying dependency graphs are formulated to compare the space complexity. Results show that the ELDG uses less memory to store nodes, arcs, and cycles compared to the EDG. To exhibit the desirability of the ELDG: firstly, the stable models of the kernel form of an NLP are characterized by the admissible colouring of the ELDG; secondly, a relation between the stable models of a kernel program and the handles of the minimal odd cycles appearing in the corresponding ELDG is established; thirdly, to the best of our knowledge, for the first time an inverse transformation from a dependency graph to the represented NLP w.r.t. the ELDG is defined, which enables transferring analytical results from the graph to the program straightforwardly.

Keywords: normal logic program, isomorphism of graphs, extended labelled dependency graph, inverse graph transformation, graph colouring

Procedia PDF Downloads 215
186 A Bottleneck-Aware Power Management Scheme in Heterogeneous Processors for Web Apps

Authors: Inyoung Park, Youngjoo Woo, Euiseong Seo

Abstract:

With the advent of WebGL, Web apps are now able to provide high-quality graphics by utilizing the underlying graphics processing units (GPUs). Although Web apps are becoming common and popular, the current power management schemes, which were devised for conventional native applications, are suboptimal for Web apps because of the additional layer, the Web browser, between the OS and the application. The Web browser running on a CPU issues GL commands, which render the images displayed by the currently running Web app, to the GPU, and the GPU processes them. The size and number of issued GL commands determine the processing load of the GPU. While the GPU is processing the GL commands, the CPU simultaneously executes the other compute-intensive threads. The actual user experience is determined by either CPU processing or GPU processing, depending on which of the two is the more demanded resource. For example, when the GPU work queue is saturated by outstanding commands, lowering the performance level of the CPU does not affect the user experience, because it is already deteriorated by the delayed execution of GPU commands. Consequently, it is desirable to lower the CPU or GPU performance level to save energy when the other resource is saturated and becomes the bottleneck in the execution flow. Based on this observation, we propose a power management scheme that is specialized for the Web app runtime environment. This approach raises two technical challenges: identification of the bottleneck resource, and determination of the appropriate performance level for the unsaturated resource. The proposed power management scheme uses the CPU utilization level of the Window Manager to tell which one is the bottleneck, if one exists. The Window Manager draws the final screen using the processed results delivered from the GPU; thus, it is on the critical path that determines the quality of user experience and is executed purely by the CPU. The proposed scheme uses a weighted average of the Window Manager utilization to prevent excessive sensitivity and fluctuation. We classified Web apps into three categories using analysis results that measure frames-per-second (FPS) changes under diverse CPU/GPU clock combinations. The results showed that the capability of the CPU decides the user experience when the Window Manager utilization is above 90%, and consequently the proposed scheme decreases the performance level of the GPU by one step. On the contrary, when its utilization is less than 60%, the bottleneck usually lies in the GPU, and it is desirable to decrease the performance of the CPU. Even for the processing unit that is not on the critical path, an excessive performance drop can adversely affect the user experience. Therefore, our scheme lowers the frequency gradually, until it finds an appropriate level, by periodically checking the CPU utilization. The proposed scheme reduced energy consumption by 10.34% on average in comparison to the conventional Linux kernel, while worsening FPS by only 1.07% on average.
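
A minimal sketch of the decision loop described above follows; the helper functions passed in (wm_utilization, cpu_step_down, gpu_step_down) are hypothetical stand-ins for platform-specific procfs/DVFS interfaces, and the smoothing weight is an assumed value.

```python
import time

ALPHA = 0.25                        # weight of the newest sample (assumed)
CPU_BOUND, GPU_BOUND = 0.90, 0.60   # Window Manager utilization thresholds

def manage(wm_utilization, cpu_step_down, gpu_step_down, period_s=0.5):
    avg = 0.0
    while True:
        # Weighted average damps sensitivity and fluctuation
        avg = ALPHA * wm_utilization() + (1 - ALPHA) * avg
        if avg > CPU_BOUND:    # Window Manager (CPU) saturated: CPU decides UX
            gpu_step_down()    # lower the unsaturated GPU to save energy
        elif avg < GPU_BOUND:  # GPU is the bottleneck
            cpu_step_down()    # lower the CPU one step at a time, re-check
        time.sleep(period_s)
```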

Keywords: interactive applications, power management, QoS, Web apps, WebGL

Procedia PDF Downloads 193
185 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute a Likelihood Ratio (LR) to quantify the strength of evidence under two competing hypotheses, namely the prosecution and the defence hypotheses, for which a set of assumptions and methods for a given data set must be made. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluates the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to deliver an incorrect estimate that could lead to a wrong judgment in a court of law. The estimation of an LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though the LoR performed better than the KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in handwriting forensics.
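
A minimal sketch of score-based LR estimation with the two estimators named above follows; the synthetic comparison scores are placeholders for the handwriting features.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
same = rng.normal(2.0, 1.0, 300)    # scores, same writer (prosecution H_p)
diff = rng.normal(-1.0, 1.2, 300)   # scores, different writers (defence H_d)

# KDE estimator: LR(s) = f(s | H_p) / f(s | H_d)
f_p, f_d = gaussian_kde(same), gaussian_kde(diff)
def lr_kde(s):
    return f_p(s) / f_d(s)

# LoR estimator: with balanced training classes, the model's log-odds
# log(p / (1 - p)) equals log LR(s)
X = np.concatenate([same, diff]).reshape(-1, 1)
y = np.concatenate([np.ones_like(same), np.zeros_like(diff)])
lor = LogisticRegression().fit(X, y)
def lr_lor(s):
    return np.exp(lor.decision_function(np.atleast_2d(s).T))

print(lr_kde(1.5), lr_lor(1.5))
```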

Keywords: confidence interval, handwriting, kernel density estimator, KDE, logistic regression, LoR, repeatability, reproducibility

Procedia PDF Downloads 126
184 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in colour video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacements, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location by a particle filter, segmenting the target region using a neural network, and finding the exact contours with the greedy snake algorithm. We use both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors during updating, the target region is given to a perceptron neural network that separates the target from the background; its output is then used for the exact calculation of the size and centre of the target, and as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database which contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 343
183 1H-NMR Spectra of Diesel-Biodiesel Blends to Evaluate the Quality and Determine the Adulteration of Biodiesel with Vegetable Oil

Authors: Luis F. Bianchessi, Gustavo G. Shimamoto, Matthieu Tubino

Abstract:

The use of biodiesel has spread in Brazil and all over the world through the trading of pure biodiesel (B100). In Brazil, the diesel oil currently sold is a blend containing 7% biodiesel (B7). In this context, it is necessary to develop methods capable of identifying the blend composition, especially regarding the quality of the biodiesel used in these blends. In this study, hydrogen nuclear magnetic resonance (1H-NMR) spectra are proposed as a means of identifying and confirming the quality of type B10 blends (10% biodiesel and 90% diesel). Furthermore, the presence of vegetable oils, whether from fuel adulteration or as evidence of a low degree of transesterification conversion during the synthesis of B100, can also be identified. Mixtures of diesel, vegetable oils and their respective biodiesels were prepared. Soybean oil and macauba kernel oil were used as raw materials. The diesel proportion remained fixed at 90%; the remaining 10% was varied between vegetable oil and biodiesel. The 1H-NMR spectrum was obtained for each mixture in order to find a correlation between the spectra and the amount of biodiesel, as well as the amount of residual vegetable oil. The ratio of the integral of the methylenic hydrogen H-2 of glycerol (exclusive to vegetable oil) to the integral of the olefinic hydrogens (present in both vegetable oil and biodiesel) was obtained. These ratios were correlated with the percentage of vegetable oil in each mixture, from 0% to 10%. The obtained correlations could be described by linear relationships, with R² of 0.9929 for soybean biodiesel and 0.9982 for macauba kernel biodiesel. Preliminary results show that the technique can be used to monitor biodiesel quality in commercial diesel-biodiesel blends, besides indicating possible adulteration.

Keywords: biodiesel, diesel, biodiesel quality, adulteration

Procedia PDF Downloads 624
182 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock movies suited to local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioural and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions of the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning approaches to predict customers' preferences with a small data set, and how to design prediction tools for these enterprises.
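
A minimal sketch of the Gaussian-kernel SVM classifier follows, with placeholder customer features and genre labels standing in for the authors' data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))             # placeholder customer features
y = (X[:, :3].sum(axis=1) > 0).astype(int)  # placeholder genre preference

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      SVC(kernel="rbf", C=1.0, gamma="scale"))  # Gaussian kernel
model.fit(X_tr, y_tr)
print("out-of-sample accuracy:", model.score(X_te, y_te))
```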

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 260
181 Food Processing Technology and Packaging: A Case Study of Indian Cashew-Nut Industry

Authors: Parashram Jakappa Patil

Abstract:

India is the global leader in the world cashew business, and the cashew-nut industry is one of the important food processing industries in the world. India is the largest producer, processor, exporter and importer of cashew in the world, supplying cashew to the rest of the world and meeting world demand. India has tremendous potential for cashew production and export. Every year India earns more than 2,000 crore rupees through the cashew trade. The cashew industry is one of the important small-scale industries in the country and plays a significant role in rural development: it generates more than 400,000 jobs in remote areas, and 95% of cashew workers are women; it provides income to poor cashew farmers; the majority of cashew processing units are small and cottage-scale; it helps stop the migration of young farmers in search of employment; it motivates rural entrepreneurship development; and it also contributes to environmental protection. Hence the Indian cashew business is a very important agribusiness with the potential to drive inclusive development. The World Bank and IMF have recognized the cashew-nut industry as an important tool for poverty eradication at the global level, which shows the importance of the cashew business and its strong presence in India. In spite of such huge potential, the cashew processing industry faces various problems: lack of infrastructure, short supply of raw cashew, lack of finance, difficulties in the collection of raw cashew, unavailability of warehouses, marketing of cashew kernels, lack of technical knowledge, and especially processing technology and the packaging of finished products. The industry has great prospects, such as scope for more cashew cultivation and production, employment generation, formation of cashew processing units, alcohol production from cashew apple, shell oil production, rural development, poverty elimination, development of socially and economically backward classes, and environmental protection. The industry has domestic as well as foreign markets, and India has tremendous potential in this regard. The cashew is a poor man's crop but a rich man's food; it is a source of income and livelihood for poor farmers, and the cashew-nut industry can play a very important role in the development of hilly regions. The objectives of this paper are to identify the problems of cashew processing and the use of processing technology, the problems of cashew kernel packaging, the evolution of cashew processing technology over the years and its impact on the final product, and the impact of good processing, achieved by adopting appropriate technology and packaging, on the international trade of cashew-nuts. The most important problems of the cashew processing industry are processing and packaging. Bad processing greatly reduces the quality of cashew kernels, especially through broken kernels, which fetch a much lower price in the market than whole kernels and are not eligible for export. On the other hand, without good packaging the cashew kernels absorb moisture, which destroys their taste. International trade in cashew-nuts thus depends on two things: cashew processing and packaging. This study has strong relevance because the cashew-nut industry is labour oriented; processing technology has so far played a limited role, since 95% of the processing work is manual. Hence processing depends on the physical performance of workers, which makes the presence of a large workforce inevitable. Many cashew processing units have closed because they cannot get a sufficient workforce. However, due to advancements in technology, this picture is slowly changing and processing work is improving. It is therefore interesting to explore all aspects of cashew processing and the packaging of the cashew business.

Keywords: cashew, processing technology, packaging, international trade, change

Procedia PDF Downloads 422
180 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of error in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. The auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response; in particular, it is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. Moreover, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error compared to existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
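
For reference, here is a minimal sketch of the classical Nadaraya-Watson estimator that the paper's improved weights build upon (the modification itself is not reproduced).

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """m(x0) = sum_i K((x0 - x_i)/h) y_i / sum_i K((x0 - x_i)/h),
    with a Gaussian kernel K and bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# usage on synthetic data
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.3, 200)   # noisy signal
grid = np.linspace(0, 10, 101)
m_hat = np.array([nadaraya_watson(g, x, y, h=0.5) for g in grid])
```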

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 141
179 A Study on the Performance of 2-PC-D Classification Model

Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli

Abstract:

There are many applications of the principal component method for reducing a large set of variables in various fields. Fisher's discriminant function is also a popular tool for classification. This research focuses on studying the performance of a principal component-Fisher's discriminant function in classifying rice kernels into their defined classes. The data were collected on the smell, or odour, of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined model, between principal components and a linear discriminant, performs as a classification model. The principal component method was used to reduce the 32 variables to a smaller and more manageable set of components. Then, the reduced components were used to develop the Fisher's discriminant function. In this research, there are 4 defined classes of rice kernel: Aromatic, Brown, Ordinary and Others. Based on the output of the principal component method, the 32 variables were reduced to only 2 components. Based on the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-discriminant function. This indicates that the classification model developed misclassifies more than 50% of the observations. In conclusion, the Fisher's discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
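
A minimal sketch of the 2-PC-D pipeline follows: standardize, reduce the 32 e-nose variables to 2 principal components, and classify with a linear discriminant; synthetic data stand in for the Cyranose readings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # placeholder: 32 e-nose variables
y = rng.integers(0, 4, size=200)  # placeholder: 4 rice kernel classes

model = make_pipeline(StandardScaler(),
                      PCA(n_components=2),           # the "2-PC" step
                      LinearDiscriminantAnalysis())  # the "D" step
model.fit(X, y)
print("training classification rate:", model.score(X, y))
```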

Keywords: classification model, discriminant function, principal component analysis, variable reduction

Procedia PDF Downloads 333
178 An Assessment of Health Hazards in Urban Communities: A Study of Spatial-Temporal Variations of Dengue Epidemic in Colombo, Sri Lanka

Authors: U. Thisara G. Perera, C. M. Kanchana N. K. Chandrasekara

Abstract:

Dengue is an epidemic spread by the Aedes aegypti and Aedes albopictus mosquitoes. Dengue cases show a dramatic growth rate in urban and semi-urban areas, especially in the tropical and sub-tropical regions of the world. Dengue has become a prominent cause of hospitalization and death in Asian countries, including Sri Lanka. During the last decade the epidemic began to spread from urban to semi-urban and then to rural settings of the country. The highest number of dengue-infected patients in Sri Lanka was recorded in 2016, with the largest share identified in the Colombo district. Together with commercial, industrial, and other supporting services, the district suffers from rapid urbanization and high population density, and the drainage and waste disposal practices of the population exert additional pressure on the environment. The district is situated in the wet zone, and low-lying lands constitute its largest portion, which further facilitates mosquito breeding sites. Therefore, the purpose of the present study was to assess the spatial and temporal distribution patterns of the dengue epidemic in the Kolonnawa MOH (Medical Officer of Health) area in the district of Colombo. The study was carried out using 615 recorded dengue cases in the Kolonnawa MOH area during the southwest monsoon season, from May to September 2016. Moran's I and kernel density estimation were used as analytical methods. The analysis was performed using the ArcGIS 10.1 software package together with Microsoft Excel, and field observation was carried out for verification during the study period. The Moran's I index indicates that the spatial distribution of dengue cases follows a clustered pattern across the area. Kernel density estimation shows that dengue cases are concentrated where the population is gathered, especially in areas comprising housing schemes. It further discloses that the hot spots of the epidemic are located in the western half of the Kolonnawa MOH area, close to the Colombo municipal boundary, with a significant relationship to high population density and unplanned urban land use practices. Field observations confirm that the drainage systems in these areas function poorly and that careless waste disposal further encourages mosquito breeding sites. This situation has grown from a public health issue into a social problem, which ultimately impacts the economy and social life of the country.
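
A minimal sketch of global Moran's I with inverse-distance weights follows; the coordinates and case counts are placeholders, and a full analysis (as in the paper) would use ArcGIS or a dedicated spatial statistics library.

```python
import numpy as np

def morans_i(values, coords):
    """Global Moran's I: (n / S0) * (z' W z) / (z' z),
    with inverse-distance weights and zero diagonal."""
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    W = np.where(d > 0, 1.0 / np.maximum(d, 1e-12), 0.0)
    n, s0 = len(values), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(50, 2))       # ward centroids (placeholder)
cases = rng.poisson(12, size=50).astype(float)  # dengue counts (placeholder)
I = morans_i(cases, coords)
print(f"Moran's I = {I:.3f}  (values well above -1/(n-1) suggest clustering)")
```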

Keywords: dengue epidemic, health hazards, kernel density, Moran's I, Sri Lanka

Procedia PDF Downloads 302
177 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, the SVM has some limitations, the most significant being that it generates point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely the Probabilistic Classification Vector Machine (PCVM), has been proposed, which respects the original functional form of the SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework which allows the extension of the PCVM to a multi-class setting. Additionally, the original PCVM framework relies on type II maximum likelihood to provide estimates of both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.

Keywords: probabilistic classification vector machines, multi-class classification, MCMC, support vector machines

Procedia PDF Downloads 222
176 Physics’s Practical Based on Android as a Motivator in Learning Physics

Authors: Yuni Rochmawati, Luluk Il Mukarromah

Abstract:

Android is a mobile operating system (OS) based on the Linux kernel and currently developed by Google. With a user interface based on direct manipulation, Android is designed primarily for touchscreen mobile devices such as smartphones and tablet computers, with specialized user interfaces for televisions (Android TV), cars (Android Auto), and wrist watches (Android Wear). Nowadays almost everyone uses a smartphone; it seems to be a must-have object because it offers many benefits. Smartphones can, of course, also benefit education, for example through lesson summaries in the form of e-books. This article, however, is not about lesson summaries but about physics practicals based on Android. We explain our idea of an Android-based physics practical, and we hope that, as a result, many students will enjoy studying physics and will remember physical phenomena through these Android-based practical exercises.

Keywords: android, smartphone, physics, practical

Procedia PDF Downloads 243
175 Modeling Palm Oil Quality During the Ripening Process of Fresh Fruits

Authors: Afshin Keshvadi, Johari Endan, Haniff Harun, Desa Ahmad, Farah Saleena

Abstract:

Experiments were conducted to develop a model for analyzing the ripening process of oil palm fresh fruits in relation to the oil yield and oil quality of the palm oil produced. This research was carried out on 8-year-old Tenera (Dura × Pisifera) palms planted in 2003 at the Malaysian Palm Oil Board Research Station. Fresh fruit bunches were harvested from designated palms from January to May 2010. The bunches were divided into three regions (top, middle and bottom), and fruits from the outer and inner layers were randomly sampled for analysis at 8, 12, 16 and 20 weeks after anthesis to establish relationships between maturity and oil development in the mesocarp and kernel. Computations on data related to ripening time, oil content and oil quality were performed using several software packages (MSTAT-C, SAS and Microsoft Excel). Nine nonlinear mathematical models were fitted to the collected data using MATLAB. The results showed that the mean mesocarp oil percentage increased from 1.24% at 8 weeks after anthesis to 29.6% at 20 weeks after anthesis. Fruits from the top part of the bunch had the highest mesocarp oil content, at 10.09%. The lowest kernel oil percentage, 0.03%, was recorded at 12 weeks after anthesis. Palmitic acid and oleic acid comprised more than 73% of the total mesocarp fatty acids at 8 weeks after anthesis, increasing to more than 80% at fruit maturity at 20 weeks. The logistic model, with the highest R² and the lowest root mean square error, was found to be the best-fit model.
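
A minimal sketch of fitting a logistic growth model to mesocarp oil content is shown below (in Python rather than the MATLAB used in the paper); apart from the two endpoint values quoted in the abstract, the observations are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    """Oil %% as logistic growth: A / (1 + exp(-k (t - t0)))."""
    return A / (1.0 + np.exp(-k * (t - t0)))

weeks = np.array([8.0, 12.0, 16.0, 20.0])
oil_pct = np.array([1.24, 8.0, 22.0, 29.6])  # middle two values are placeholders

(A, k, t0), _ = curve_fit(logistic, weeks, oil_pct, p0=(30.0, 0.5, 14.0))
pred = logistic(weeks, A, k, t0)
rmse = float(np.sqrt(np.mean((oil_pct - pred) ** 2)))
print(f"A={A:.1f}%, k={k:.2f}/week, t0={t0:.1f} weeks, RMSE={rmse:.2f}")
```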

Keywords: oil palm, oil yield, ripening process, anthesis, fatty acids, modeling

Procedia PDF Downloads 316
174 Physics-Informed Convolutional Neural Networks for Reservoir Simulation

Authors: Jiangxia Han, Liang Xue, Keda Chen

Abstract:

Despite significant progress over the last decades in reservoir simulation using numerical discretization, meshing remains complex. Moreover, the high degree of freedom of the space-time flow field makes the solution process very time-consuming. Therefore, we present Physics-Informed Convolutional Neural Networks (PICNN) as a hybrid scientific-theory-and-data method for reservoir modeling. Besides labeled data, the model is driven by the scientific theory of the underlying problem, such as governing equations, boundary conditions, and initial conditions. PICNN integrates the governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. PICNN provides a methodology to model and history-match flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how this method can be applied in the context of a forward simulation for continuous problems. Furthermore, several complex scenarios are tested, including the presence of data noise, different work schedules, and different well patterns.
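
A minimal sketch of the core idea, encoding a governing equation as a fixed convolution kernel, follows: a 5-point Laplacian stencil measures the residual of a steady-state pressure field (Laplace equation), and this residual joins the data-matching term in the loss. This illustrates the principle under simplifying assumptions and is not the authors' architecture.

```python
import torch
import torch.nn.functional as F

# Discrete Laplacian as a fixed convolution kernel (unit grid spacing)
laplace = torch.tensor([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]]).view(1, 1, 3, 3)

def pde_residual(p):
    """p: (1, 1, H, W) pressure field; approx. Laplacian on interior cells."""
    return F.conv2d(p, laplace)

p = torch.rand(1, 1, 32, 32, requires_grad=True)  # stand-in for a CNN output
obs_mask = torch.zeros(1, 1, 32, 32)
obs_mask[..., 16, 16] = 1.0                       # a single "well" observation
obs_value = 1.0

opt = torch.optim.Adam([p], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss_data = ((p - obs_value) ** 2 * obs_mask).sum()  # data matching
    loss_pde = (pde_residual(p) ** 2).mean()             # honor the PDE
    (loss_data + 10.0 * loss_pde).backward()
    opt.step()
```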

Keywords: convolutional neural networks, deep learning, flow and transport in porous media, physics-informed neural networks, reservoir simulation

Procedia PDF Downloads 147
173 Hybrid Multipath Congestion Control

Authors: Akshit Singhal, Xuan Wang, Zhijun Wang, Hao Che, Hong Jiang

Abstract:

Multipath Transmission Control Protocols (MPTCPs) allow flows to explore path diversity to improve throughput, reliability and network resource utilization. However, existing solutions may discourage users from adopting them in multipath scenarios where different paths are charged under different pricing structures, e.g., WiFi vs. cellular connections, both widely available on mobile phones. In this paper, we propose a Hybrid MPTCP (H-MPTCP) with a built-in mechanism to incentivize users to use multiple paths with different pricing structures. In the meantime, H-MPTCP preserves the nice properties enjoyed by state-of-the-art MPTCP solutions. Extensive real Linux implementation results verify that H-MPTCP can indeed achieve the design objectives.

Keywords: network, TCP, WiFi, cellular, congestion control

Procedia PDF Downloads 720
172 Existence of Minimal and Maximal Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez-Camus

Abstract:

In this work, the existence of at least one minimal and one maximal mild solution to the Cauchy problem for a fractional evolution equation of neutral type involving a general kernel is proved. The equation involves an operator A generating a resolvent family and an integral resolvent family on a Banach space X, and a kernel belonging to a large class that covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work are the Kuratowski measure of noncompactness and fixed point theorems, specifically of Darbo type, together with an iterative method of lower and upper solutions based on an order in X induced by a normal cone P. Initially, the equation is a Cauchy problem involving a fractional derivative in the Caputo sense. The equivalent integral version is then formulated and, by defining a convenient functional, using the theory of resolvent families and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution for the initial problem is obtained. Furthermore, the existence of minimal and maximal mild solutions is proved through an iterative method of lower and upper solutions, using the Ascoli-Arzelà theorem and Gronwall's inequality. Finally, the case of the derivative in the Caputo sense is recovered.

Keywords: fractional evolution equations, Volterra integral equations, minimal and maximal mild solutions, neutral type equations, non-local in time equations

Procedia PDF Downloads 178
171 Combustion and Emissions Performance of Syngas Fuels Derived from Palm Kernel Shell and Polyethylene (PE) Waste via Catalytic Steam Gasification

Authors: Chaouki Ghenai

Abstract:

A computational fluid dynamics analysis of the burning of syngas fuels derived from a biomass and plastic solid waste mixture through the gasification process is presented in this paper. The syngas fuel is burned in a gas turbine can combustor. A gas turbine can combustor with swirl is designed to burn the fuel efficiently and reduce the emissions. The main objective is to test the impact of the alternative syngas fuel compositions and lower heating value on the combustion performance and emissions. The syngas fuel is produced by blending Palm Kernel Shell (PKS) with Polyethylene (PE) waste via catalytic steam gasification (fluidized bed reactor). A high-hydrogen-content syngas fuel was obtained by mixing 30% PE waste with PKS. The syngas composition obtained through the gasification process is 76.2% H2, 8.53% CO, 4.39% CO2 and 10.90% CH4, and the lower heating value of the syngas fuel is LHV = 15.98 MJ/m³. Three fuels were tested in this study: natural gas (100% CH4), the syngas fuel, and pure hydrogen (100% H2). The power from the combustor was kept constant for all the fuels tested. The effects of the syngas fuel composition and lower heating value on the flame shape, gas temperature, and the mass of carbon dioxide (CO2) and nitrogen oxides (NOx) per unit of energy generation are presented in this paper. The results show an increase in the peak flame temperature and NO mass fractions for the syngas and hydrogen fuels compared to natural gas combustion, and lower average CO2 emissions at the exit of the combustor for the syngas compared to the natural gas fuel.

Keywords: CFD, combustion, emissions, gas turbine combustor, gasification, solid waste, syngas, waste to energy

Procedia PDF Downloads 593
170 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model with a response variable in the form of count data that follows the Poisson distribution. A pair of count variables showing high correlation can be analyzed by bivariate Poisson regression; the numbers of infant deaths and maternal deaths are such count data. The Poisson regression assumption is equidispersion, where the mean and variance values are equal. However, actual count data often have a variance greater or less than the mean (overdispersion and underdispersion, respectively). Violations of this assumption can be overcome by applying Generalized Poisson Regression. The characteristics of each regency can also affect the number of cases that occur; this issue can be addressed by the spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on the infant mortality rate and 5 regency groups based on the maternal mortality rate. The variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, clean and healthy household behavior, and married women whose first marriage was under the age of 18.
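
A minimal sketch of the adaptive bisquare kernel weighting follows: each location's bandwidth is its distance to the q-th nearest neighbour, so the kernel adapts to the local density of regencies; the centroid coordinates are placeholders.

```python
import numpy as np

def adaptive_bisquare_weights(coords, q=10):
    """W[i, j]: weight of location j in location i's local regression,
    using the bisquare kernel (1 - (d/b_i)^2)^2 for d < b_i, else 0,
    with b_i the distance from i to its q-th nearest neighbour."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    b = np.sort(d, axis=1)[:, q]   # adaptive bandwidth per location
    u = d / b[:, None]
    return np.where(u < 1.0, (1.0 - u**2) ** 2, 0.0)

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 100, size=(38, 2))  # e.g. 38 East Java regency/city
W = adaptive_bisquare_weights(centroids, q=10) # centroids (placeholder coords)
```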

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 162
169 Design of Incident Information System in IoT Virtualization Platform

Authors: Amon Olimov, Umarov Jamshid, Dae-Ho Kim, Chol-U Lee, Ryum-Duck Oh

Abstract:

This paper proposes an incident information system based on an IoT virtualization platform. The IoT information environment is a platform developed for the purpose of collecting a variety of data by managing regionally scattered IoT devices easily and conveniently, in addition to analyzing data collected from roads. Moreover, the platform is configured to provide incident information based on sensed data. It provides the same input/output interface as UNIX and Linux by mapping IoT devices onto the directories and files of the file system, and it supports a variety of ways of accessing the devices. Thus, it can be applied not only to incident information but also to other platforms. This paper proposes an incident information system that identifies and provides various data in real time on urgent matters on roads, based on the existing USN/M2M and IoT virtualization platform.

Keywords: incident information system, IoT, virtualization platform, USN, M2M

Procedia PDF Downloads 351
168 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, due to the difficulty of collecting training samples. Hence, many feature selection methods, such as the F-score and HSIC (Hilbert-Schmidt Independence Criterion), have been developed to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with a different bandwidth for each feature, and it considers both the within-class and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the within-class separability is minimized and the between-class separability is maximized simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. The optimal bandwidths also show the importance of the bands for hyperspectral image classification: the reciprocals of the bandwidths can be viewed as band weights, and the smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. A support vector machine was applied to classify the testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set, with 220 bands, the highest accuracies obtained by applying the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically using only the first few features: the accuracies with respect to feature subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Thus, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84164) approximates the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset first, according to the specific purpose, and then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
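
A minimal sketch of the generalized RBF kernel with per-band bandwidths follows; the spectra and bandwidths are placeholders for what the genetic algorithm would tune.

```python
import numpy as np

def generalized_rbf(X, Y, sigmas):
    """K(x, y) = exp(-sum_d (x_d - y_d)^2 / sigma_d^2).
    X: (n, d), Y: (m, d), sigmas: (d,) per-band bandwidths."""
    diff = X[:, None, :] - Y[None, :, :]
    return np.exp(-np.sum((diff / sigmas) ** 2, axis=-1))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 220))             # placeholder: 220-band spectra
sigmas = rng.uniform(0.5, 5.0, size=220)   # bandwidths a GA would tune
K = generalized_rbf(X, X, sigmas)
band_order = np.argsort(1.0 / sigmas)[::-1]  # bands ranked by weight 1/sigma
```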

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 265
167 Rough Oscillatory Singular Integrals on Rⁿ

Authors: H. M. Al-Qassem, L. Cheng, Y. Pan

Abstract:

In this paper, we establish sharp bounds for oscillatory singular integrals with an arbitrary real polynomial phase P. Our kernels are allowed to be rough both on the unit sphere and in the radial direction. We show that the bounds grow no faster than log(deg(P)), which is optimal and was first obtained by Parissis and Papadimitrakis for kernels without any radial roughness. Among the key ingredients of our method are an L¹→L² estimate and extrapolation.
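
As a representative form, assumed from the standard setting of such results rather than taken from the paper, the operator under study can be written as

```latex
% Rough oscillatory singular integral with polynomial phase P, kernel
% rough on the sphere (\Omega) and in the radial direction (h); the
% paper's exact hypotheses on \Omega and h are not reproduced here:
T_{P}f(x) \;=\; \mathrm{p.v.}\int_{\mathbb{R}^{n}}
  e^{iP(y)}\,\frac{\Omega(y/|y|)\,h(|y|)}{|y|^{n}}\,f(x-y)\,dy,
\qquad
\|T_{P}\|_{L^{p}\to L^{p}} \;\le\; C_{p}\,\log\bigl(2+\deg P\bigr).
```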

Keywords: oscillatory singular integral, rough kernel, singular integral, Orlicz spaces, Block spaces, extrapolation, L^{p} boundedness

Procedia PDF Downloads 358
166 Improving Communication System through Router Configuration: The Nigerian Navy Experience

Authors: Saidu I. Rambo, Emmanuel O. Ibam, Sunday O. Adewale

Abstract:

The configuration of routers for effective communication in the Nigerian Navy (NN) enables the Navy to improve on its current communication systems, which face challenges that make them only partially effective. The main implementation is to configure routers using a hierarchical model and to obtain a VSAT option on a C-band platform. These routers act as a link between the Naval Headquarters and the Commands under it. The routers' main responsibility is to forward packets from a source location to a destination using a Link State Routing Protocol (LSRP). In addition, using the Point-to-Point Protocol (PPP), a strongly encrypted password is created with the Challenge Handshake Authentication Protocol (CHAP), which uses the one-way hash function Message Digest 5 (MD5) to provide protection against hackers and intruders. Routers can be configured using a Linux operating system or an internetwork operating system on the Microsoft platform. With this, packets can be forwarded to various locations more effectively than with the present system.

Keywords: C-band, communication, router, VSAT

Procedia PDF Downloads 366