Search results for: coconut kernel
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 358

208 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stocking suitable movies for local audiences and retaining more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral, and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classifier and a logistic regression model were built to extract features from sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions of the learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM model correctly predicted movie genre preferences in 85% of positive cases, and its accuracy increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning to predict customer preferences from a small data set and how to design prediction tools for such enterprises.
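As a rough sketch of the comparison described above, the two classifiers can be set up side by side as follows; the synthetic data, feature counts, and hyperparameters are placeholders, not the authors' dataset or settings:

```python
# Illustrative comparison of a Gaussian (RBF) kernel SVM and logistic
# regression on synthetic stand-ins for customer features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for demographic / behavioral / social features.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Out-of-sample accuracy, mirroring the paper's error comparison.
svm_acc = svm.score(X_te, y_te)
logreg_acc = logreg.score(X_te, y_te)
```

In scikit-learn, the effective capacity of the SVM (the abstract's VC-dimension lever) can be reduced by lowering `C` or `gamma`, which is one practical way to trade training fit for less overfitting.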

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 231
207 Food Processing Technology and Packaging: A Case Study of Indian Cashew-Nut Industry

Authors: Parashram Jakappa Patil

Abstract:

India is the global leader in the cashew business, and the cashew-nut industry is one of the most important food processing industries in the world. India is the largest producer, processor, exporter, and importer of cashew in the world, supplying cashew to, and meeting the demand of, the rest of the world, and it has tremendous potential for cashew production and export to other countries. Every year India earns more than 2,000 crore rupees through the cashew trade. The cashew industry is one of the important small-scale industries in the country and plays a significant role in rural development: it generates more than 400,000 jobs in remote areas, 95% of cashew workers are women, it provides income to poor cashew farmers, most processing units are small or cottage-scale, it helps stem the migration of young farmers in search of employment, it motivates rural entrepreneurship, and it contributes to environmental protection. Hence the cashew business is a very important agribusiness in India, with the potential to drive inclusive development. The World Bank and IMF have recognized the cashew-nut industry as an important tool for poverty eradication at the global level, which underlines the importance of the cashew business and its strong presence in India. In spite of this huge potential, the cashew processing industry faces various problems: inadequate infrastructure, short supply and difficult collection of raw cashew, limited access to finance, unavailability of warehouses, marketing of cashew kernels, and a lack of technical knowledge, especially in processing technology and the packaging of finished products.
The industry also has great prospects: scope for more cashew cultivation and production, employment generation, formation of new processing units, alcohol production from cashew apples, shell oil production, rural development, poverty elimination, upliftment of socially and economically backward classes, and environmental protection. The industry serves both domestic and foreign markets, and India has tremendous potential in this regard. The cashew is a poor man’s crop but a rich man’s food, and a source of income and livelihood for poor farmers; the cashew-nut industry can play a very important role in the development of hilly regions. The objectives of this paper are to identify the problems of cashew processing and the use of processing technology, the problems of cashew kernel packaging, the evolution of cashew processing technology over the years and its impact on the final product, and the impact of good processing and appropriate packaging technology on the international trade in cashew nuts. The most important problems of the cashew processing industry are processing and packaging. Poor processing greatly reduces the quality of the cashew kernel, in particular by breaking kernels, which fetch a much lower market price than whole kernels and are not eligible for export. On the other hand, without good packaging the cashew kernel absorbs moisture, which destroys its taste. The international trade in cashew nuts depends on two things: processing and packaging. This study has strong relevance because the cashew-nut industry is labour-oriented; processing technology has so far played a minor role because 95% of the processing work is manual. Processing has therefore depended on the physical performance of workers, making a large workforce inevitable, and many processing units have closed because they could not find a sufficient workforce.
However, with advances in technology this picture is slowly changing and processing work is improving. It is therefore interesting to explore all aspects of cashew processing and the packaging of the cashew business.

Keywords: cashew, processing technology, packaging, international trade, change

Procedia PDF Downloads 389
206 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of errors in sample surveys. It introduces bias and large variance into the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response; in particular, it is used via an improved Nadaraya-Watson kernel regression technique to compensate for the non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of the finite population mean, as well as tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
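The classical Nadaraya-Watson estimator on which the paper improves can be sketched in a few lines; this shows only the standard kernel-weighted local mean on toy data, not the authors' improved weights:

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Standard Nadaraya-Watson estimate at x0 with a Gaussian kernel:
    a weighted mean of y, weighted by kernel distance of x to x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)        # locally weighted average

# Noiseless toy regression: y = x^2 on a grid.
x = np.linspace(0.0, 1.0, 101)
y = x ** 2
est = nadaraya_watson(0.5, x, y, h=0.05)    # should sit near the true 0.25
```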

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 105
205 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches

Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.

Abstract:

A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test here, excites the actuators periodically based on feedback from the sensors. The sensors must provide feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in generating a response or acquiring the excitation pulses may lead to control-loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available in the market may be the best solution for this kind of simulation, but they pose limitations such as the lack of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general-purpose operating system (bare Linux kernel) to achieve deterministic deadlines, gaining the added advantages of a GPOS with real-time features. We discuss techniques for running the time-critical application uninterrupted at the highest priority, reducing network latency in a distributed architecture, real-time data acquisition, data storage and retrieval, user interaction, etc.
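One of the techniques mentioned, running the time-critical task at the highest priority, can be sketched on Linux via the POSIX `SCHED_FIFO` policy. This is an illustrative snippet (the function name and the priority value 50 are arbitrary); succeeding requires root or `CAP_SYS_NICE`, so the sketch degrades gracefully when the privilege or the policy is unavailable:

```python
import os

def try_set_fifo(priority=50):
    """Attempt to switch the calling process to the SCHED_FIFO real-time
    policy. Returns True on success, False when the policy is unavailable
    (non-Linux Python builds) or the process lacks the privilege."""
    if not hasattr(os, "sched_setscheduler"):
        return False  # platform without POSIX scheduling support
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except OSError:   # typically PermissionError without CAP_SYS_NICE
        return False

elevated = try_set_fifo()
```

On a stock kernel this is usually combined with CPU isolation (`isolcpus`, `taskset`) so the FIFO task is not merely highest-priority but also alone on its core.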

Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency

Procedia PDF Downloads 103
204 A Study on the Performance of 2-PC-D Classification Model

Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli

Abstract:

There are many applications of the principal component method for reducing large sets of variables in various fields, and Fisher’s discriminant function is a popular tool for classification. This research studies the performance of a combined principal component and Fisher’s discriminant model in classifying rice kernels into their defined classes. The data were collected from the smell, or odour, of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined model of principal components and a linear discriminant performs as a classification model. The principal component method was used to reduce the 32 variables to a smaller, manageable set of components, and the reduced components were then used to develop the Fisher’s discriminant function. There are four defined classes of rice kernel: Aromatic, Brown, Ordinary, and Others. Based on the output of the principal component method, the 32 variables were reduced to only 2 components. According to the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-discriminant function; in other words, the model misclassified more than 50% of the observations. In conclusion, the Fisher’s discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
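The 2-PC-D pipeline described above, principal component reduction followed by a linear discriminant, can be sketched as follows; the iris data set stands in for the e-nose measurements, which are not reproduced in the abstract:

```python
# PCA to 2 components, then Fisher-style linear discriminant analysis,
# scored by resubstitution accuracy as in the paper's classification table.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)
model = make_pipeline(PCA(n_components=2), LinearDiscriminantAnalysis())
model.fit(X, y)
accuracy = model.score(X, y)  # fraction of observations correctly classified
```

Whether two components suffice depends entirely on how much class-discriminating variance the first components capture, which is why the paper's 2-component model underperformed.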

Keywords: classification model, discriminant function, principal component analysis, variable reduction

Procedia PDF Downloads 303
203 TACTICAL: RAM Image Retrieval in Linux Using Protected Mode Architecture’s Paging Technique

Authors: Sedat Aktas, Egemen Ulusoy, Remzi Yildirim

Abstract:

This article explains how to acquire a RAM image from a computer running the Linux operating system and the steps to follow while acquiring it. Taking a RAM image means dumping the physical memory at an instant and writing it to a file, which can be likened to taking a snapshot of everything in the computer’s memory at that moment. This process is essential for tools that analyse RAM images, such as Volatility, because an image must be acquired before those tools can analyse it. Such tools are used extensively in digital forensics, the set of processes for digitally examining the information on a computer or server on behalf of official authorities. In this article, the protected mode architecture of the Linux operating system is examined, and a method is followed for saving an image of system memory to disk via a kernel driver. The tables and access methods used by the operating system are examined based on its basic architecture, and the most appropriate methods and their application are presented. Since the literature contains no article directly addressing this topic on Linux, this study aims to contribute by describing how RAM images are obtained. LiME can be mentioned as a similar tool, but no explanation of its memory dumping method is available. Given how frequently such tools are used, contributing to the field of digital forensics has been the main motivation of this study.
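A first step for any physical-memory dumper is enumerating the "System RAM" regions that the kernel exposes in /proc/iomem, since only those ranges should be copied. This illustrative parser (not part of the TACTICAL tool) shows the idea on a shortened sample; on a real system the file must be read as root to see actual addresses:

```python
import re

def system_ram_ranges(iomem_text):
    """Parse /proc/iomem-style text and return (start, end) address pairs
    for top-level regions labelled 'System RAM' -- the regions a physical
    memory dumper must walk."""
    ranges = []
    for line in iomem_text.splitlines():
        m = re.match(r"\s*([0-9a-f]+)-([0-9a-f]+) : System RAM", line)
        if m:
            ranges.append((int(m.group(1), 16), int(m.group(2), 16)))
    return ranges

# Shortened, hypothetical /proc/iomem excerpt.
sample = """00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : PCI Bus 0000:00
00100000-7fffffff : System RAM"""
ram = system_ram_ranges(sample)
```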

Keywords: linux, paging, addressing, ram-image, memory dumping, kernel modules, forensic

Procedia PDF Downloads 72
202 An Assessment of Health Hazards in Urban Communities: A Study of Spatial-Temporal Variations of Dengue Epidemic in Colombo, Sri Lanka

Authors: U. Thisara G. Perera, C. M. Kanchana N. K. Chandrasekara

Abstract:

Dengue is an epidemic spread by the Aedes aegypti and Aedes albopictus mosquitoes. Dengue cases show a dramatic growth rate in urban and semi-urban areas, especially in the tropical and sub-tropical regions of the world. Dengue has become a prominent cause of hospitalization and death in Asian countries, including Sri Lanka. During the last decade the epidemic began to spread from urban to semi-urban and then to rural settings of the country. The highest number of dengue-infected patients in Sri Lanka was recorded in 2016, with the most patients identified in Colombo district. Together with commercial, industrial, and other supporting services, the district experiences rapid urbanization and high population density; the drainage and waste disposal practices of its residents thus exert additional pressure on the environment. The district is situated in the wet zone, and low-lying lands constitute its largest portion, which further facilitates mosquito breeding sites. The purpose of the present study was therefore to assess the spatial and temporal distribution patterns of the dengue epidemic in the Kolonnawa MOH (Medical Officer of Health) area in the district of Colombo. The study used 615 recorded dengue cases in the Kolonnawa MOH area during the south-east monsoon season from May to September 2016. Moran’s I and kernel density estimation were used as analytical methods. The data were analysed through the integrated use of ArcGIS 10.1 along with Microsoft Excel, and field observation was carried out for verification during the study period. Results of the Moran’s I index indicate that the dengue cases follow a clustered spatial distribution across the area.
Kernel density estimation shows that dengue cases are high where population is concentrated, especially in areas comprising housing schemes. It further discloses that the hot spots of the epidemic are located in the western half of the Kolonnawa MOH area, close to the Colombo municipal boundary, and that they bear a significant relationship to high population density and unplanned urban land use. Field observation confirms that the drainage systems in these areas function poorly and that careless waste disposal further encourages mosquito breeding sites. The situation has grown harmfully from a public health issue into a social problem that ultimately affects the economy and social life of the country.
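Global Moran's I, the clustering statistic used above, can be computed directly from area values and a spatial weight matrix; this toy grid (not the Kolonnawa data) shows a clustered pattern yielding a clearly positive index:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for a 1-D array of area values and a symmetric
    binary spatial weight matrix with zero diagonal."""
    z = values - values.mean()
    num = (weights * np.outer(z, z)).sum()
    return (len(values) / weights.sum()) * num / (z ** 2).sum()

# Toy 4x4 grid: left half high incidence (1), right half low (0),
# i.e. a spatially clustered pattern like the dengue hot spots.
grid = np.array([[1, 1, 0, 0]] * 4, dtype=float)
n_rows, n_cols = grid.shape
n = n_rows * n_cols
W = np.zeros((n, n))
for r in range(n_rows):                  # rook-contiguity neighbours
    for c in range(n_cols):
        i = r * n_cols + c
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n_rows and 0 <= cc < n_cols:
                W[i, rr * n_cols + cc] = 1.0

I = morans_i(grid.ravel(), W)            # positive => clustered pattern
```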

Keywords: dengue epidemic, health hazards, kernel density, Moran’s I, Sri Lanka

Procedia PDF Downloads 268
201 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions as a linear combination of kernel functions centred on a subset of the training data, referred to as support vectors. Despite its popularity amongst practitioners, the SVM has some limitations, the most significant being that it produces point predictions rather than predictive distributions. Stemming from this issue, a probabilistic model, the Probabilistic Classification Vector Machine (PCVM), has been proposed, which respects the original functional form of the SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks in industrial applications involve more than two classes. Consequently, this research proposes a framework that extends the PCVM to a multi-class setting. Additionally, the original PCVM framework relies on type II maximum likelihood to estimate both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective because it scales badly as the number of classes increases. Accordingly, we propose applying Markov Chain Monte Carlo (MCMC) methods to obtain a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers on synthetic and real-life problems.
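The PCVM's functional form, a sparse kernel expansion pushed through a probit link, can be sketched as follows. The weights, centres, and kernel width here are hand-picked placeholders rather than fitted (or MCMC-sampled) values; the sketch only shows how a prediction becomes a probability:

```python
import numpy as np
from scipy.stats import norm

def pcvm_predict(x, centers, weights, bias, theta):
    """PCVM-style prediction: a linear combination of Gaussian kernel
    functions, mapped through the probit link to give P(class = 1 | x)."""
    k = np.exp(-theta * np.sum((centers - x) ** 2, axis=1))  # RBF values
    return norm.cdf(weights @ k + bias)                      # probit link

# Two hypothetical "support" centres with opposite-signed weights.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
weights = np.array([2.0, -2.0])
p = pcvm_predict(np.array([0.0, 0.0]), centers, weights, bias=0.0, theta=1.0)
```

A point near the positively weighted centre receives a probability well above 0.5, and the output is a full probability rather than the SVM's hard point prediction.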

Keywords: probabilistic classification vector machines, multi-class classification, MCMC, support vector machines

Procedia PDF Downloads 200
200 Antioxidant Efficacy of Lovi (Flacourtia inermis) Peel Extract in Edible Oils during Storage

Authors: Sasini U. G. Nanayakkara, Nishala E. Wedamulla, W. A. J. P. Wijesinghe

Abstract:

Lovi (Flacourtia inermis) is an underutilized fruit crop grown in Sri Lanka with promising antioxidant properties and thus great potential for use as a natural antioxidant. Owing to concerns over synthetic antioxidants, there is a growing trend towards adding natural antioxidants to retard the rancidity of edible oils. Against this backdrop, an extract obtained from the peel of F. inermis fruit was used to retard the rancidity of selected edible oils. The free fatty acid (FFA) content and peroxide value (PV) of sunflower oil (SO) and virgin coconut oil (VCO) were measured at 3-day intervals for 21 days at 65 ± 5°C after addition of the extract at 500, 1000, and 2000 ppm; α-tocopherol at 500 ppm served as the positive control, and SO and VCO without added extract served as controls. The extract was prepared with 70% ethanol using ultrasound-assisted extraction, and its antioxidant efficacy and total phenolic content (TPC) were measured using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging assay and the Folin-Ciocalteu method, respectively. The antioxidant activity (IC50) and TPC of the extract were 227.14 ± 4.12 µg mL⁻¹ and 4.87 ± 0.01 mg GAE per gram, respectively. During the storage period, the FFA content and PV of both oils increased with time. However, SO showed a comparatively higher PV than VCO, indicating greater progression of lipid oxidation, since PV is a good indicator of the extent of primary oxidation products formed in oils. The most effective extract concentration was 2000 ppm. After 21 days of storage, the VCO control exhibited significantly (p < 0.05) higher FFA (0.36%) and PV (1.93 meq kg⁻¹) than VCO with the extract at 1000 ppm (FFA: 0.35%; PV: 1.72 meq kg⁻¹) and 2000 ppm (FFA: 0.28%; PV: 1.19 meq kg⁻¹), demonstrating the efficacy of lovi peel extract, at higher concentrations, in retarding lipid oxidation of edible oils during storage.
Moreover, the FFA and PV of SO (FFA: 0.10%; PV: 12.38 meq kg⁻¹) and VCO (FFA: 0.28%; PV: 1.19 meq kg⁻¹) with the extract at 2000 ppm were significantly (p < 0.05) lower than those of the positive controls, SO with α-tocopherol (FFA: 0.22%; PV: 17.94 meq kg⁻¹) and VCO with α-tocopherol (FFA: 0.29%; PV: 1.39 meq kg⁻¹), after 21 days. Accordingly, lovi peel extract at 2000 ppm was more effective than α-tocopherol in retarding lipid oxidation of edible oils. In conclusion, lovi peel extract has strong antioxidant properties and can be used as a natural antioxidant to inhibit deteriorative oxidation of edible oils.

Keywords: antioxidant, Flacourtia inermis, peroxide value, virgin coconut oil

Procedia PDF Downloads 98
199 Modeling Palm Oil Quality During the Ripening Process of Fresh Fruits

Authors: Afshin Keshvadi, Johari Endan, Haniff Harun, Desa Ahmad, Farah Saleena

Abstract:

Experiments were conducted to develop a model for analyzing the ripening process of oil palm fresh fruits in relation to the yield and quality of the palm oil produced. The research was carried out on 8-year-old Tenera (Dura × Pisifera) palms planted in 2003 at the Malaysian Palm Oil Board Research Station. Fresh fruit bunches were harvested from designated palms from January to May 2010. The bunches were divided into three regions (top, middle, and bottom), and fruits from the outer and inner layers were randomly sampled for analysis at 8, 12, 16, and 20 weeks after anthesis to establish relationships between maturity and oil development in the mesocarp and kernel. Computations on ripening time, oil content, and oil quality were performed with several software packages (MSTAT-C, SAS, and Microsoft Excel), and nine nonlinear mathematical models were fitted to the collected data using MATLAB. The results showed that mean mesocarp oil content increased from 1.24% at 8 weeks after anthesis to 29.6% at 20 weeks after anthesis. Fruits from the top part of the bunch had the highest mesocarp oil content, 10.09%. The lowest kernel oil content, 0.03%, was recorded at 12 weeks after anthesis. Palmitic and oleic acid comprised more than 73% of total mesocarp fatty acids at 8 weeks after anthesis, increasing to more than 80% at fruit maturity at 20 weeks. The logistic model, with the highest R² and the lowest root mean square error, was found to be the best-fitting model.
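Fitting a logistic curve to oil-accumulation data, as done in the paper with MATLAB, can be sketched with SciPy; the data here are synthetic, generated from known parameters, since the weekly measurements themselves are not reproduced in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, k, t0):
    """Logistic growth curve: a / (1 + exp(-k (t - t0)))."""
    return a / (1.0 + np.exp(-k * (t - t0)))

# Synthetic oil-accumulation curve over weeks 8-20 after anthesis,
# generated with hypothetical parameters a=30, k=0.5, t0=14.
weeks = np.linspace(8, 20, 13)
oil = logistic(weeks, 30.0, 0.5, 14.0)

# Nonlinear least-squares fit, judged (as in the paper) by RMSE.
params, _ = curve_fit(logistic, weeks, oil, p0=(25.0, 0.3, 12.0))
residuals = oil - logistic(weeks, *params)
rmse = np.sqrt(np.mean(residuals ** 2))
```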

Keywords: oil palm, oil yield, ripening process, anthesis, fatty acids, modeling

Procedia PDF Downloads 277
198 Physics-Informed Convolutional Neural Networks for Reservoir Simulation

Authors: Jiangxia Han, Liang Xue, Keda Chen

Abstract:

Despite the significant progress over the last decades in reservoir simulation using numerical discretization, meshing remains complex, and the high degree of freedom of the space-time flow field makes the solution process very time-consuming. We therefore present Physics-Informed Convolutional Neural Networks (PICNN) as a hybrid scientific-theory and data-driven method for reservoir modeling. Besides labeled data, the model is driven by the scientific theory of the underlying problem: governing equations, boundary conditions, and initial conditions. PICNN integrates the governing equations and boundary conditions into the network architecture in the form of a customized convolution kernel. The loss function is composed of data matching, initial conditions, and other measurable prior knowledge. By customizing the convolution kernel and minimizing the loss function, the neural network parameters not only fit the data but also honor the governing equation. PICNN provides a methodology for modeling and history-matching flow and transport problems in porous media. Numerical results demonstrate that the proposed PICNN can provide an accurate physical solution from a limited dataset. We show how the method can be applied in the context of forward simulation for continuous problems. Furthermore, several complex scenarios are tested, including data noise, different work schedules, and different well patterns.
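The idea of encoding a governing equation as a fixed convolution kernel can be illustrated with the 1-D steady diffusion equation u'' = 0; this sketch uses a discrete Laplacian stencil and a two-term loss (data matching plus physics residual), and is far simpler than the reservoir equations the paper targets:

```python
import numpy as np

def physics_residual(u, dx):
    """Residual of u'' = 0, computed by convolving the field with a fixed
    discrete-Laplacian kernel -- the PICNN idea of a governing equation
    expressed as a customized convolution kernel."""
    laplacian = np.array([1.0, -2.0, 1.0]) / dx ** 2
    return np.convolve(u, laplacian, mode="valid")

def picnn_style_loss(u_pred, u_data, mask, dx):
    """Data-matching term on observed cells plus physics term everywhere."""
    data_term = np.mean((u_pred[mask] - u_data[mask]) ** 2)
    physics_term = np.mean(physics_residual(u_pred, dx) ** 2)
    return data_term + physics_term

x = np.linspace(0.0, 1.0, 11)
u_exact = 2.0 * x + 1.0             # linear field: exact solution of u'' = 0
mask = np.zeros_like(x, dtype=bool)
mask[[0, -1]] = True                # "labeled data" only at the boundaries
loss = picnn_style_loss(u_exact, u_exact, mask, dx=x[1] - x[0])
```

For the exact solution both terms vanish; during training, a network minimizing this loss is pulled simultaneously toward the data and toward satisfying the equation.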

Keywords: convolutional neural networks, deep learning, flow and transport in porous media, physics-informed neural networks, reservoir simulation

Procedia PDF Downloads 98
197 Existence of Minimal and Maximal Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez-Camus

Abstract:

In this work, the existence of at least one minimal and one maximal mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving a general kernel. The equation involves an operator A generating a resolvent family and an integral resolvent family on a Banach space X, together with a kernel belonging to a large class that covers many relevant cases from physics, in particular the important case of time-fractional evolution equations of neutral type. The main tools used are the Kuratowski measure of noncompactness, fixed point theorems of Darbo type, and an iterative method of lower and upper solutions based on an order in X induced by a normal cone P. Initially, the equation is a Cauchy problem involving a fractional derivative in the Caputo sense. The equivalent integral version is then formulated and, by defining a convenient functional using the theory of resolvent families and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution to the initial problem is obtained. Furthermore, the existence of minimal and maximal mild solutions is proved through an iterative method of lower and upper solutions, using the Arzelà-Ascoli theorem and Gronwall’s inequality. Finally, the case of the Caputo derivative is recovered.

Keywords: fractional evolution equations, Volterra integral equations, minimal and maximal mild solutions, neutral type equations, non-local in time equations

Procedia PDF Downloads 141
196 Combustion and Emissions Performance of Syngas Fuels Derived from Palm Kernel Shell and Polyethylene (PE) Waste via Catalytic Steam Gasification

Authors: Chaouki Ghenai

Abstract:

A computational fluid dynamics analysis of the burning of syngas fuels, derived from a biomass and plastic solid waste mixture through a gasification process, is presented in this paper. The syngas fuel is burned in a gas turbine can combustor designed with swirl to burn the fuel efficiently and reduce emissions. The main objective is to test the impact of alternative syngas fuel compositions and lower heating values on combustion performance and emissions. The syngas fuel is produced by blending palm kernel shell (PKS) with polyethylene (PE) waste via catalytic steam gasification in a fluidized bed reactor; a high-hydrogen syngas was obtained by mixing 30% PE waste with PKS. The syngas composition obtained through the gasification process is 76.2% H₂, 8.53% CO, 4.39% CO₂, and 10.90% CH₄, with a lower heating value of LHV = 15.98 MJ/m³. Three fuels were tested in this study: natural gas (100% CH₄), the syngas fuel, and pure hydrogen (100% H₂). The power from the combustor was kept constant for all fuels tested. The effect of syngas fuel composition and lower heating value on the flame shape, gas temperature, and the mass of carbon dioxide (CO₂) and nitrogen oxides (NOₓ) per unit of energy generated is presented. The results show an increase in the peak flame temperature and NO mass fractions for the syngas and hydrogen fuels compared to natural gas combustion, and lower average CO₂ emissions at the combustor exit for the syngas compared to natural gas.

Keywords: CFD, combustion, emissions, gas turbine combustor, gasification, solid waste, syngas, waste to energy

Procedia PDF Downloads 560
195 Preparation of Bacterial Cellulose Membranes from Nata de Coco for CO2/CH4 Separation

Authors: Yanin Hosakun, Sujitra Wongkasemjit, Thanyalak Chaisuwan

Abstract:

Carbon dioxide removal from natural gas is an important process because carbon dioxide in natural gas contributes to pipeline corrosion, reduces the heating value, and takes up volume in the pipeline. In this study, bacterial cellulose was chosen for the CO₂/CH₄ gas separation membrane owing to its unique structure and prominent properties. Additionally, it can simply be obtained by culturing the bacterium Acetobacter xylinum through fermentation of coconut juice. Bacterial cellulose membranes with and without silver ions were prepared, and their separation performance for CO₂ and CH₄ was studied.

Keywords: bacterial cellulose, CO2, CH4 separation, membrane, nata de coco

Procedia PDF Downloads 211
194 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method

Authors: Luh Eka Suryani, Purhadi

Abstract:

Poisson regression is a non-linear regression model whose response variable is count data following a Poisson distribution. A pair of count variables showing high correlation can be analyzed by bivariate Poisson regression; the numbers of infant deaths and maternal deaths are such count data. Poisson regression assumes equidispersion, where the mean and variance are equal. However, actual count data can have a variance greater or smaller than the mean (overdispersion or underdispersion), and violation of this assumption can be overcome by applying generalized Poisson regression. Furthermore, the characteristics of each regency can affect the number of cases that occur, which can be handled by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on infant mortality rate and 5 regency groups based on maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, households with clean and healthy behavior, and women whose first marriage was under the age of 18.
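The bisquare kernel weighting used in geographically weighted regression assigns each location weights that decay smoothly to zero at its bandwidth; a minimal sketch follows (with an arbitrary fixed bandwidth, whereas the adaptive version chooses each location's bandwidth so it keeps a fixed number of neighbours):

```python
import numpy as np

def bisquare_weights(distances, bandwidth):
    """Bisquare kernel: w = (1 - (d/b)^2)^2 for d < b, else 0."""
    ratio = distances / bandwidth
    w = (1.0 - ratio ** 2) ** 2
    return np.where(distances < bandwidth, w, 0.0)

# Distances (in arbitrary units) from one regency centroid to others.
d = np.array([0.0, 5.0, 10.0, 20.0])
w = bisquare_weights(d, bandwidth=10.0)  # full weight at 0, zero beyond b
```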

Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion

Procedia PDF Downloads 129
193 Thermal Analysis of a Composite of Coconut Fiber and Latex

Authors: Elmo Thiago Lins Cöuras Ford, Valentina Alessandra Carvalho do Vale

Abstract:

Given the unquestionable need for environmental preservation, natural fibers have been seen as a salutary alternative to synthetic, glass, and metallic fibers for the production of composites. In this work, the behavior of a composite made with coconut husk fiber as reinforcement and latex as the matrix was analyzed when subjected to a heat source. Temperature profiles were measured on the internal and external surfaces of the composite, as well as the temperature gradient within it. The behavior of the composite when subjected to a cold source was also analyzed. Conclusions were drawn from the responses of the system.

Keywords: natural fiber, composite, temperature, latex, gradient

Procedia PDF Downloads 771
192 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing for avoiding the Hughes phenomena due to the difficulty for collecting training samples. Hence, lots of researches developed feature selection methods such as F-score, HSIC (Hilbert-Schmidt Independence Criterion), and etc., to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we proposed a nonlinear class separability measure based on kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability was formed by a generalized RBF kernel with different bandwidths with respect to different features. Moreover, it considered the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths such that the smallest with-class separability and the largest between-class separability simultaneously. This indicates the corresponding feature space is more suitable for classification. In addition, the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as weights of bands. The smaller bandwidth, the larger weight of the band, and the more importance for classification. Hence, the descending order of the reciprocals of the bands gives an order for selecting the appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate the selected feature subsets by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of samples were randomly selected to form the training dataset. 
All non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by the proposed method, the F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas the F-score and HSIC select 168 and 217 features, respectively. Moreover, the classification accuracy increases dramatically using only the first few features: the accuracies for subsets of 10, 20, 50, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the features selected by the proposed method (110 features), the corresponding classification accuracy (0.84168) approaches the highest classification accuracy, 0.8795. Similar results were obtained for the other two hyperspectral image data sets, PAVIA and Salinas A. These results illustrate that the proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can first apply the proposed method to determine a suitable feature subset for a specific purpose; researchers can then use only the corresponding sensors to obtain the hyperspectral image and classify the samples. This not only improves classification performance but also reduces the cost of obtaining hyperspectral images.
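The band-weighting idea described above can be sketched in a few lines (a minimal illustration with hypothetical bandwidths; in the paper the bandwidths come from the genetic algorithm, not from hand-picked values):

```python
import math

def generalized_rbf(x, y, bandwidths):
    """Generalized RBF kernel with a separate bandwidth per feature,
    the form used in the abstract for the nonlinear separability measure."""
    s = sum(((a - b) / sigma) ** 2 for a, b, sigma in zip(x, y, bandwidths))
    return math.exp(-s)

def feature_ranking(bandwidths):
    """Rank bands by the reciprocal of their bandwidth: the smaller the
    bandwidth, the larger the weight and the more important the band."""
    weights = [1.0 / s for s in bandwidths]
    return sorted(range(len(weights)), key=lambda d: weights[d], reverse=True)

# Hypothetical tuned bandwidths for a 4-band toy image: band 1 has the
# smallest bandwidth, hence the largest weight and highest importance.
sigmas = [5.0, 0.5, 2.0, 10.0]
order = feature_ranking(sigmas)
print(order)  # band indices in descending importance
```

Selecting the first k indices of `order` then gives the k-feature subsets whose accuracies are reported in the experiments.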

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 238
191 Rough Oscillatory Singular Integrals on Rⁿ

Authors: H. M. Al-Qassem, L. Cheng, Y. Pan

Abstract:

In this paper, we establish sharp bounds for oscillatory singular integrals with an arbitrary real polynomial phase P. Our kernels are allowed to be rough both on the unit sphere and in the radial direction. We show that the bounds grow no faster than log(deg(P)), which is optimal and was first obtained by Parissis and Papadimitrakis for kernels without any radial roughness. Among the key ingredients of our methods are an L¹→L² estimate and extrapolation.
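Schematically, the operators in question have the following form (our notation, reconstructed from the standard setting of rough oscillatory singular integrals; the paper's precise assumptions may differ):

```latex
T_{P}f(x) \;=\; \mathrm{p.v.}\int_{\mathbb{R}^n} e^{iP(y)}\,
  \frac{\Omega(y/|y|)\,h(|y|)}{|y|^{n}}\, f(x-y)\, dy ,
\qquad
\|T_{P}\|_{L^{p}\to L^{p}} \;\lesssim\; \log\bigl(\deg(P)\bigr),
```

where Ω is a rough function on the unit sphere Sⁿ⁻¹ and h is a rough radial factor; the logarithmic growth in deg(P) is the sharp rate referred to in the abstract.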

Keywords: oscillatory singular integral, rough kernel, singular integral, Orlicz spaces, block spaces, extrapolation, Lᵖ boundedness

Procedia PDF Downloads 322
190 Biofiltration Odour Removal at Wastewater Treatment Plant Using Natural Materials: Pilot Scale Studies

Authors: D. Lopes, I. I. R. Baptista, R. F. Vieira, J. Vaz, H. Varela, O. M. Freitas, V. F. Domingues, R. Jorge, C. Delerue-Matos, S. A. Figueiredo

Abstract:

Deodorization is nowadays a necessity in wastewater treatment plants. Nitrogen and sulphur compounds, volatile fatty acids, aldehydes and ketones are responsible for the unpleasant odours, with ammonia, hydrogen sulphide and mercaptans being the most common pollutants. Although chemical treatments of the extracted air are efficient, they are more expensive than biological treatments, namely due to the use of chemical reagents (commonly sulphuric acid, sodium hypochlorite and sodium hydroxide). Biofiltration offers the advantage of avoiding reagents (only in some cases are nutrients added to increase treatment efficiency) and can be considered a sustainable process when the packing medium used is of natural origin. In this work, the application of locally available natural materials was studied both at laboratory scale and at pilot scale in a real wastewater treatment plant. The materials selected for this study were indigenous Portuguese forest materials derived from eucalyptus and pine, such as woodchips and bark; coconut fiber was also used for comparison purposes. Their physico-chemical characterization was performed: density, moisture, pH, buffer capacity and water retention capacity. Laboratory studies involved batch adsorption experiments for ammonia and hydrogen sulphide removal and evaluation of microbiological activity. Four pilot-scale biofilters (1 m³ each) were installed at a local wastewater treatment plant, treating odours from the effluent receiving chamber. Each biofilter contained a different packing material consisting of mixtures of eucalyptus bark, pine woodchips and coconut fiber, with added buffering agents and nutrients. The odour treatment efficiency was monitored over time, as well as other operating parameters. The operation at pilot scale suggested that, among the processes involved in biofiltration (adsorption, absorption and biodegradation), the first dominates at the beginning, while the biofilm is developing.
When the biofilm is completely established, and the adsorption capacity of the material is reached, biodegradation becomes the most relevant odour removal mechanism. High odour and hydrogen sulphide removal efficiencies were achieved throughout the testing period (over 6 months), confirming the suitability of the materials selected, and mixtures thereof prepared, for biofiltration applications.

Keywords: ammonia and hydrogen sulphide removal, biofiltration, natural materials, odour control in wastewater treatment plants

Procedia PDF Downloads 274
189 Fuglede-Putnam Theorem for ∗-Class A Operators

Authors: Mohammed Husein Mohammad Rashid

Abstract:

For a bounded linear operator T acting on a complex infinite-dimensional Hilbert space ℋ, we say that T is a ∗-class A operator (abbreviated T ∈ A*) if |T²| ≥ |T*|². In this article, we prove the following assertions: (i) we establish some conditions which imply the normality of ∗-class A operators; (ii) we consider a ∗-class A operator T ∈ ℬ(ℋ) with reducing kernel such that TX = XS for some X ∈ ℬ(K, ℋ) and prove a Fuglede-Putnam type theorem when the adjoint of S ∈ ℬ(K) is a dominant operator; (iii) furthermore, we extend the asymmetric Putnam-Fuglede theorem to the class of ∗-class A operators.

Keywords: Fuglede-Putnam theorem, normal operators, ∗-class A operators, dominant operators

Procedia PDF Downloads 53
188 Lean Comic GAN (LC-GAN): A Lightweight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper, we propose a neural style transfer solution: a lightweight separable-convolution-kernel-based GAN architecture (LC-GAN) well suited to designing filters for mobile phone cameras and edge devices, converting any image into the 2D animated comic style of movies like He-Man, Superman and The Jungle Book. This helps 2D animation artists create new characters from images of real people, relieving them of endless hours of manual labour drawing each and every pose of a cartoon; it can even be used to create scenes from real-life images. This greatly reduces the turnaround time for making 2D animated movies and decreases costs in terms of manpower and time. In addition, being extremely lightweight, it can serve as a camera filter capable of taking comic-style shots with a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices because of their scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which makes it ideal and ultra-efficient for designing camera filters on low-resource devices such as mobile phones, tablets and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a larger convolution kernel size, it produces richer-resolution comic-style pictures with six times fewer parameters, trained for just 25 extra epochs on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data.
Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; a point-wise convolution with a 1×1 kernel then brings the network back to the required channel number. This reduces the number of parameters substantially, making the network extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", p. 320), which lets the network exploit the easier training afforded by batch norm while maintaining nonlinear feature capture through the learnable parameters.
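The parameter saving from replacing a standard convolution with a depthwise plus point-wise pair can be illustrated with a simple count (the layer sizes below are hypothetical, not taken from the paper):

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters in a standard 2D convolution layer (bias omitted):
    one k x k filter per (input channel, output channel) pair."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise convolution (one k x k filter per input channel)
    followed by a 1x1 point-wise convolution mixing the channels,
    as described in the abstract."""
    depthwise = c_in * k * k      # per-channel spatial filtering
    pointwise = c_in * c_out      # 1x1 kernel: channel mixing only
    return depthwise + pointwise

# Hypothetical layer: 3 input channels -> 64 output channels, 3x3 kernel.
std = standard_conv_params(3, 64, 3)        # 3 * 64 * 9  = 1728
sep = depthwise_separable_params(3, 64, 3)  # 27 + 192    = 219
print(std, sep, round(std / sep, 1))
```

Even for this tiny layer the separable version needs roughly an eighth of the parameters, which is the mechanism behind the 170 MB versus 31 MB model-size gap cited above.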

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for edge devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss

Procedia PDF Downloads 96
187 Non-Differentiable Mond-Weir Type Symmetric Duality under Generalized Invexity

Authors: Jai Prakash Verma, Khushboo Verma

Abstract:

In the present paper, a pair of Mond-Weir type non-differentiable multiobjective second-order programming problems, involving two kernel functions, where each objective function contains a support function, is formulated. We prove weak, strong and converse duality theorems for the second-order symmetric dual programs under η-pseudoinvexity conditions.

Keywords: non-differentiable multiobjective programming, second-order symmetric duality, efficiency, support function, eta-pseudoinvexity

Procedia PDF Downloads 222
186 Sharp Estimates of Oscillatory Singular Integrals with Rough Kernels

Authors: H. Al-Qassem, L. Cheng, Y. Pan

Abstract:

In this paper, we establish sharp bounds for oscillatory singular integrals with an arbitrary real polynomial phase P. Our kernels are allowed to be rough both on the unit sphere and in the radial direction. We show that the bounds grow no faster than log(deg(P)), which is optimal and was first obtained by Parissis and Papadimitrakis for kernels without any radial roughness. Our results substantially improve many previously known results. Among the key ingredients of our methods are a sharp L¹→L² estimate and extrapolation.

Keywords: oscillatory singular integral, rough kernel, singular integral, Orlicz spaces, block spaces, extrapolation, Lᵖ boundedness

Procedia PDF Downloads 429
185 Superconvergence of the Iterated Discrete Legendre Galerkin Method for Fredholm-Hammerstein Equations

Authors: Payel Das, Gnaneshwar Nelakanti

Abstract:

In this paper, we analyse the iterated discrete Legendre Galerkin method for Fredholm-Hammerstein integral equations with smooth kernels. Using a sufficiently accurate numerical quadrature rule, we obtain superconvergence rates for the iterated discrete Legendre Galerkin solutions in both the infinity and L² norms. Numerical examples are given to illustrate the theoretical results.

Keywords: hammerstein integral equations, spectral method, discrete galerkin, numerical quadrature, superconvergence

Procedia PDF Downloads 442
184 Multi-Channel Information Fusion in C-OTDR Monitoring Systems: Various Approaches to Classify of Targeted Events

Authors: Andrey V. Timofeev

Abstract:

The paper presents new results concerning the selection of an optimal information fusion formula for ensembles of C-OTDR channels. The goal of information fusion is to create an integral classifier designed for effective classification of seismoacoustic target events. The LPBoost (LP-β and LP-B variants), Multiple Kernel Learning, and Weighing Inversely as Lipschitz Constants (WILC) approaches were compared. WILC is a brand-new approach to the optimal fusion of Lipschitz classifier ensembles. Results of practical usage are presented.
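One plausible reading of the WILC idea, weighting each channel's classifier inversely to its Lipschitz constant, can be sketched as follows (the paper's exact formula is not given in the abstract, so this is an illustrative assumption, not the authors' method):

```python
def wilc_weights(lipschitz_constants):
    """Fusion weights inversely proportional to each classifier's
    Lipschitz constant, normalized to sum to one: smoother classifiers
    (smaller constants) get more say in the fused decision."""
    inv = [1.0 / L for L in lipschitz_constants]
    total = sum(inv)
    return [w / total for w in inv]

def fuse(scores, weights):
    """Combine per-channel decision scores into one integral score."""
    return sum(s * w for s, w in zip(scores, weights))

# Three hypothetical C-OTDR channels with Lipschitz constants 1, 2 and 4.
w = wilc_weights([1.0, 2.0, 4.0])
print(w, fuse([0.9, 0.2, 0.4], w))
```

The fused score would then be thresholded (or fed to a final decision rule) to classify the seismoacoustic event.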

Keywords: Lipschitz Classifier, classifiers ensembles, LPBoost, C-OTDR systems

Procedia PDF Downloads 429
183 Use of Cassava Flour in Cakes Processing

Authors: S. S. Silva, S. M. A. Souza, C. F. P. Oliveira

Abstract:

Agriculture is a major economic base in Brazil; moreover, family farming is directly responsible for the production of most agricultural products in Brazil, such as cassava. The number of studies on the use of cassava and its derivatives in the food industry has increased, and they form the basis of this study. We sought to develop a food that takes advantage of products from family farmers, adding value to those products, and to study the effects of cassava flour as a replacement for wheat flour. To this end, a gluten-free cake, aimed at meeting the needs of the celiac public, was developed, containing cassava flour, cane sugar, honey, egg, soya oil, desiccated coconut, baking powder and water. To evaluate its technological characteristics, physicochemical and texture characterizations were performed. The cake showed characteristics similar to those of a cake made with wheat flour, including growth and aeration of the dough. In summary, marketing the product is viable, in that it has the typical overall appearance of a cake made with wheat flour, meets the needs of celiac people and values family farming.

Keywords: baking, cake, cassava flour, celiac disease

Procedia PDF Downloads 386
182 Optimal Production Planning in Aromatic Coconuts Supply Chain Based on Mixed-Integer Linear Programming

Authors: Chaimongkol Limpianchob

Abstract:

This work addresses a production planning problem that arises in the production of aromatic coconuts in Samut Sakhon province, Thailand. The planning involves the forwarding of aromatic coconuts from the harvest areas, which are classified into two groups (self-owned areas and contracted areas), to the factory; the decisions about the flow of aromatic coconuts within the plant; and the question of which warehouse to use. The problem is formulated as a mixed-integer linear programming model within a supply chain management framework. The objective function seeks to minimize the total cost, including the harvesting, labor and inventory costs. Constraints on the system include the production activities in the company and demand requirements. Numerical results are presented to demonstrate the feasibility of the coconut supply chain model compared with a base case.
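The structure of such a model, continuous flows from two harvest-area groups plus a discrete warehouse choice, can be illustrated on a toy instance. All numbers below are hypothetical, and brute-force enumeration stands in for a real MILP solver, which only works because the instance is tiny:

```python
from itertools import product

# Toy instance: forward coconuts from two harvest-area groups to meet
# factory demand, and open exactly one of two warehouses (the binary
# decision that makes the model mixed-integer).
demand = 10
harvest_cost = {"self_owned": 2.0, "contracted": 3.0}   # cost per unit
capacity = {"self_owned": 6, "contracted": 8}           # units available
warehouse_fixed_cost = {"A": 5.0, "B": 3.0}

best = None
for qs, qc, wh in product(range(capacity["self_owned"] + 1),
                          range(capacity["contracted"] + 1),
                          warehouse_fixed_cost):
    if qs + qc < demand:                                # demand constraint
        continue
    cost = (qs * harvest_cost["self_owned"]
            + qc * harvest_cost["contracted"]
            + warehouse_fixed_cost[wh])
    if best is None or cost < best[0]:
        best = (cost, qs, qc, wh)

print(best)  # (total cost, self-owned units, contracted units, warehouse)
```

A real instance would instead pass the same objective and constraints to a MILP solver; the point here is only how the harvesting costs, capacities, demand and the warehouse decision fit together.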

Keywords: aromatic coconut, supply chain management, production planning, mixed-integer linear programming

Procedia PDF Downloads 426
181 Student Dropout in the Plantation Settlement: A Case Study in Sri Lanka

Authors: Irshana Muhamadhu Razmy

Abstract:

Education is one of the main necessities for a modern society to access wealth as well as to achieve social well-being. Education contributes to enhancing and developing the social and economic status of individuals and to building a vibrant community within a strong nation. The student dropout problem refers to students who enrol in a school and are later unable to complete their grade education due to multiple factors. In Sri Lanka, the tea plantation sector is prominent and differs from other plantation sectors such as palm oil, rubber and coconut. Therefore, the present study focuses on the factors influencing student dropout in the tea plantation sector in Sri Lanka, through research conducted on the Labookellie estate in Nuwara Eliya District. This research uses both qualitative and quantitative methods. The study examines the factors associated with student dropout, namely family, school and social factors, broken down by student characteristics (gender, grade and ethnicity), in the plantation area of the Labookellie estate.

Keywords: student dropout, school, plantation settlement, social environment

Procedia PDF Downloads 148
180 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of sensitivity to noisy samples and handles imprecision in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel; it plays an important role in shaping the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable, and different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence, and the performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done in the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels.
The effect of variability in the prediction and generalization of PFRSVM is examined with respect to values of the parameter C. The method effectively resolves the effects of outliers and the problems of class imbalance and overlapping classes, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy of PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
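A common distance-based form of the fuzzy membership the abstract describes, alongside the hyperbolic tangent kernel, can be sketched as follows (a simplified, non-kernelized membership in the input space; the paper's kernelized formulation may differ):

```python
import math

def fuzzy_membership(x, center, radius, delta=1e-6):
    """Fuzzy membership of a training sample: samples near their class
    center get a membership close to 1, while noisy samples far from the
    center contribute less to the decision surface. This is the standard
    fuzzy-SVM form based on each class's center and radius."""
    d = math.dist(x, center)
    return max(0.0, 1.0 - d / (radius + delta))

def tanh_kernel(x, y, kappa=0.1, c=-1.0):
    """Hyperbolic tangent (sigmoid) kernel of the kind PFRSVM uses;
    kappa and c are hypothetical parameter choices."""
    return math.tanh(kappa * sum(a * b for a, b in zip(x, y)) + c)

center, radius = [0.0, 0.0], 2.0
print(fuzzy_membership([0.5, 0.5], center, radius))  # near center: high
print(fuzzy_membership([3.0, 3.0], center, radius))  # outlier: clipped to 0
```

In the full algorithm each sample's membership scales its penalty in the SVM objective, which is how outliers and noisy samples lose influence over the decision surface.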

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 457
179 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, a set of hyperparameters is available for configuration. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to assess the effectiveness of the selected parameters using two implementation approaches, pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image sizes under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of convolutional filters and image size. To generalise the validation, four publicly available remote sensing datasets with different land covers, AID, RSD, UCMerced and RSCCN, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments; it has shown efficiency in both training and testing. The results show that increasing the number of epochs leads to a higher accuracy rate, as expected; however, the convergence state is highly dataset-dependent. The batch size evaluation shows that a larger batch size slightly decreases the classification accuracy compared to a small batch size.
For example, selecting 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, increasing the batch size to 200 yields an accuracy rate of 86.5% at the 11th epoch, and 63% when using only one epoch. On the other hand, the choice of kernel size is only loosely related to the dataset; from a practical point of view, a filter size of 20 produces 70.4286%. The final experiment, on image size, shows that accuracy improves with larger images, although the performance gain is computationally expensive. These conclusions open opportunities toward better classification performance in various applications, such as planetary remote sensing.
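The sweep described above can be organized as a simple grid over the four hyperparameters, one training run per combination (the training call is stubbed here; in the study each run was an AlexNet training in the GPU platform):

```python
from itertools import product

# Hyperparameter values taken from the experiments described above.
epochs = [1, 11]
batch_sizes = [32, 64, 128, 200]
kernel_sizes = [1, 3, 5, 7, 10, 15, 20, 25, 30]
image_sizes = [64, 96, 128, 180, 224]

def train_and_eval(e, b, k, s):
    """Placeholder for one training run; would return validation accuracy."""
    return 0.0

results = {
    (e, b, k, s): train_and_eval(e, b, k, s)
    for e, b, k, s in product(epochs, batch_sizes, kernel_sizes, image_sizes)
    if k < s  # a convolutional filter cannot exceed the input image size
}
print(len(results))  # number of configurations to run
```

Even this modest grid yields hundreds of runs, which is why the study fixes most parameters while varying one or two at a time.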

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 139