Search results for: feature extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3213

2253 Comparison of Polyphenolic Profile of a Berry from Two Different Sources, Using an Optimized Extraction Method

Authors: G. Torabian, A. Fathi, P. Valtchev, F. Dehghani

Abstract:

The superior polyphenol content of Sambucus nigra berries has high health potential for the production of nutraceutical products. Numerous factors influence the polyphenol content of the final products, including the berries’ source and the subsequent processing steps. The aim of this study is to compare the polyphenol content of berries from two different sources and to optimise the polyphenol extraction process from elderberries. Berries from source B had more acceptable physical properties than those from source A; a single berry from source B was double the size and weight (both wet and dry) of a source A berry. Despite the appropriate physical characteristics of source B berries, their polyphenolic profile was inferior: source A berries had 2.3-fold higher total anthocyanin content and nearly twice the total phenolic content and total flavonoid content of source B. Moreover, the results of this study showed that almost 50 percent of the phenolic content of the berries is entrapped within their skin and pulp and potentially cannot be extracted by press juicing. To address this challenge and to increase the total polyphenol yield of the extract, we used a cold-shock blade grinding method to break the cell walls. The results showed that using cultivars with higher phenolic content, as well as using the whole fruit including juice, skin and pulp, can increase polyphenol yield significantly and thus may boost the potential of using elderberries in therapeutic products.

Keywords: different sources, elderberry, grinding, juicing, polyphenols

Procedia PDF Downloads 290
2252 A Conceptual Analysis of the Right of Taxpayers to Claim a Refund in Nigeria

Authors: Hafsat Iyabo Sa'adu

Abstract:

A salient feature of Nigerian tax law is the right of the taxpayer to demand a refund where excess tax is paid. Section 23 of the Federal Inland Revenue Service (Establishment) Act, 2007 vests the Federal Inland Revenue Service with the power to make tax refunds as well as to set guidelines and requirements for the refund process from time to time. In addition, Section 61 of the Act empowers the Service to issue information circulars to acquaint stakeholders with the policy on the refund process. A circular was issued to that effect to correct the former position that excess tax could not be paid to the claimant/taxpayer until after the annual audit of the Service. It is surprising, however, that no such circular features under the states’ laws; hence there are inconsistencies in the tax-paying system in Nigeria. This study therefore sets out to examine the concept of tax refund in Nigeria. To achieve this objective, a doctrinal study was undertaken, wherein both federal and state laws were consulted, as well as journals and textbooks. The research revealed that the law should be specific as to the time frame within which to make the refund. It further revealed that it is essential to put in place a legal framework for the tax system that recognizes excess payment as a debt due from the state. This would provide a foundational framework for the relationship between taxpayers and the Federal Inland Revenue Service as well as promote effective tax administration in all the states of the federation. Several recommendations were made, especially the legislative passage of a ‘Refund Circular Bill’ at the state level pursuant to the Federal Inland Revenue Service (Establishment) Act, 2007.

Keywords: claim, Nigeria, refund, right

Procedia PDF Downloads 115
2251 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease in its early stages from its symptoms are not readily available in remote regions. Therefore, this study specifically addressed early detection of the leaf scald, red rot, and eyespot diseases of sugarcane. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. Next, the image dataset was pre-processed for feature extraction and selection. Finally, a CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
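
As a concrete illustration of the pipeline described above, the following is a minimal transfer-learning sketch in PyTorch. The folder layout, hyperparameters, and frozen-backbone choice are assumptions for illustration; the paper does not publish its code, so this is not the authors' implementation.

```python
# Minimal sketch of a VGG-based binary classifier for diseased vs. healthy
# sugarcane leaves (illustrative only; dataset paths and hyperparameters
# are assumptions, not the study's exact configuration).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),          # VGG expects 224x224 inputs
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: sugarcane/train/diseased, sugarcane/train/healthy
train_set = datasets.ImageFolder("sugarcane/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.vgg16(weights="IMAGENET1K_V1")   # pre-trained VGG backbone
for p in model.features.parameters():
    p.requires_grad = False                     # freeze convolutional features
model.classifier[6] = nn.Linear(4096, 2)        # diseased vs. healthy head

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
for images, labels in loader:                   # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```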

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 112
2250 Thermochemical Modelling for Extraction of Lithium from Spodumene and Prediction of Promising Reagents for the Roasting Process

Authors: Allen Yushark Fosu, Ndue Kanari, James Vaughan, Alexandre Chagnes

Abstract:

Spodumene is a lithium-bearing mineral of great interest due to the increasing demand for lithium in emerging electric and hybrid vehicles. The conventional method of processing the mineral for the metal requires an unavoidable thermal transformation of the α-phase to the β-phase, followed by roasting with suitable reagents to produce lithium salts for downstream processes. The selection of an appropriate reagent for roasting is key to the success of the process and overall lithium recovery. Much research has been conducted to identify good reagents for process efficiency, leading to sulfation, alkaline, chlorination, fluorination, and carbonizing methods of lithium recovery from the mineral. HSC Chemistry is thermochemical software that can be used to model metallurgical process feasibility and predict possible reaction products prior to experimental investigation. The software was employed to investigate and explain the characteristics of the various reagents employed in the literature for spodumene roasting up to 1200°C. The simulation indicated that all reagents used for sulfation and alkaline roasting were feasible in the direction of lithium salt production. Chlorination was only feasible when Cl2 and CaCl2 were used as chlorination agents, but not with NaCl or KCl. Depending on the kind of lithium salt formed during carbonizing and fluorination, the process was either spontaneous or non-spontaneous throughout the temperature range investigated. The HSC software was further used to simulate and predict some promising reagents that may be equally good for roasting the mineral for efficient lithium extraction but have not yet been considered by researchers.
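
The feasibility screening that HSC Chemistry automates reduces to evaluating the Gibbs free energy change, ΔG(T) = ΔH − TΔS, over the roasting temperature range: a reagent is predicted to drive the reaction wherever ΔG < 0. A toy version of this screen follows; the enthalpy and entropy values are placeholders, not real thermodynamic data for any spodumene reaction.

```python
# Toy Gibbs-energy screen of a roasting reagent, mirroring what HSC Chemistry
# automates. dH and dS below are placeholder values, NOT real data.
import numpy as np

dH = -120.0          # assumed reaction enthalpy, kJ/mol
dS = -0.085          # assumed reaction entropy, kJ/(mol.K)

T = np.arange(298, 1474, 25)          # 25 C to 1200 C, in kelvin
dG = dH - T * dS                      # Gibbs free energy change

for t, g in zip(T, dG):
    tag = "feasible" if g < 0 else "non-spontaneous"
    print(f"T = {t:4.0f} K  dG = {g:8.1f} kJ/mol  -> {tag}")
```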

Keywords: thermochemical modelling, HSC chemistry software, lithium, spodumene, roasting

Procedia PDF Downloads 155
2249 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization

Authors: Wenqi Liu, Reginald Bailey

Abstract:

This study explores an advanced approach to enhancing B2B sales forecasting by integrating machine learning models with a rule-based decision framework. The methodology begins with the development of a machine learning classification model to predict conversion likelihood, aiming to improve accuracy over traditional methods like logistic regression. The classification model's effectiveness is measured using metrics such as accuracy, precision, recall, and F1 score, alongside a feature importance analysis to identify key predictors. Following this, a machine learning regression model is used to forecast sales value, with the objective of reducing mean absolute error (MAE) compared to linear regression techniques. The regression model's performance is assessed using MAE, root mean square error (RMSE), and R-squared metrics, emphasizing feature contribution to the prediction. To bridge the gap between predictive analytics and decision-making, a rule-based decision model is introduced that prioritizes customers based on predefined thresholds for conversion probability and predicted sales value. This approach significantly enhances customer prioritization and improves overall sales performance by increasing conversion rates and optimizing revenue generation. The findings suggest that this combined framework offers a practical, data-driven solution for sales teams, facilitating more strategic decision-making in B2B environments.
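
A sketch of what the rule-based layer on top of the two models might look like; the thresholds and bucket names are illustrative assumptions, not values from the study.

```python
# Sketch of the rule-based prioritization layer combining the classifier's
# conversion probability with the regressor's predicted sales value.
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    p_convert: float      # output of the classification model
    pred_value: float     # output of the regression model

def priority(lead: Lead,
             p_threshold: float = 0.6,          # assumed threshold
             value_threshold: float = 50_000.0  # assumed threshold
             ) -> str:
    """Bucket a lead by predefined probability and value thresholds."""
    if lead.p_convert >= p_threshold and lead.pred_value >= value_threshold:
        return "high"
    if lead.p_convert >= p_threshold or lead.pred_value >= value_threshold:
        return "medium"
    return "low"

leads = [Lead("Acme", 0.82, 120_000), Lead("Globex", 0.35, 80_000)]
for lead in sorted(leads, key=lambda l: l.p_convert * l.pred_value, reverse=True):
    print(lead.name, priority(lead))
```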

Keywords: sales forecasting, machine learning, rule-based decision model, customer prioritization, predictive analytics

Procedia PDF Downloads 5
2248 A New Method Separating Relevant Features from Irrelevant Ones Using Fuzzy and OWA Operator Techniques

Authors: Imed Feki, Faouzi Msahli

Abstract:

Selection of relevant parameters from a high-dimensional process operation setting space is a problem frequently encountered in industrial process modelling. This paper presents a method for selecting the most relevant fabric physical parameters for each sensory quality feature. The proposed relevancy criterion has been developed using two approaches. The first uses a fuzzy sensitivity criterion, exploiting, from experimental data, the relationship between the physical parameters and all the sensory quality features for each evaluator. Next, an OWA aggregation procedure is applied to aggregate the ranking lists provided by the different evaluators. In the second approach, another panel of experts provides ranking lists of physical features according to their professional knowledge. By again applying OWA and a fuzzy aggregation model, the data sensitivity-based ranking list and the knowledge-based ranking list are combined using our proposed percolation technique to determine the final ranking list. The key idea of the percolation technique is to filter the relevant features automatically and objectively by creating a gap between the scores of relevant and irrelevant parameters. It generates thresholds automatically, which effectively reduces the human subjectivity and arbitrariness of manually chosen thresholds. For a specific sensory descriptor, the threshold is defined systematically by iteratively aggregating (n times) the ranking lists generated by the OWA and fuzzy models, according to a specific algorithm. Having applied the percolation technique to a real example, a well-known finished textile product (stonewashed denim, usually considered the most important quality criterion in jeans evaluation), we separated the relevant physical features from the irrelevant ones for each sensory descriptor. The originality and performance of the proposed relevant feature selection method are shown by the variability in the number of physical features in the set of selected relevant parameters. Instead of selecting identical numbers of features with a predefined threshold, the proposed method adapts to the specific nature of the complex relations between sensory descriptors and physical features, proposing lists of relevant features of different sizes for different descriptors. To obtain more reliable results, the percolation technique was applied to combine the fuzzy global relevancy and OWA global relevancy criteria, clearly distinguishing the scores of the relevant physical features from those of the irrelevant ones.
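
For readers unfamiliar with OWA aggregation, a minimal sketch follows: scores are sorted in descending order and then weighted by rank position, so the weight vector (an illustrative choice here, not the paper's) controls how optimistic or pessimistic the aggregation is.

```python
# Minimal OWA (Ordered Weighted Averaging) aggregation of the relevancy
# scores given by several evaluators.
import numpy as np

def owa(scores, weights):
    """Sort scores in descending order, then weight them by position."""
    s = np.sort(np.asarray(scores, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0) and len(w) == len(s)
    return float(s @ w)

# Relevancy of one physical parameter as scored by four evaluators:
evaluator_scores = [0.9, 0.4, 0.7, 0.6]
weights = [0.4, 0.3, 0.2, 0.1]        # emphasizes the higher scores
print(owa(evaluator_scores, weights))  # -> 0.73
```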

Keywords: data sensitivity, feature selection, fuzzy logic, OWA operators, percolation technique

Procedia PDF Downloads 603
2247 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform

Authors: S. Hutasavi, D. Chen

Abstract:

The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), Built-up Index (BUI), and Modified Built-up Index (MBUI). These indices were applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after incorporating nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further studies of the socioeconomic impacts of regional development policy over the EEC region.
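
The index computation itself is a one-liner in the GEE Python API. Below is a hedged sketch of the NDBI step, assuming Landsat 8 Collection 2 Level-2 bands (SR_B6 = SWIR1, SR_B5 = NIR) and an illustrative bounding box, date range, and threshold; the paper's MBUI and VIIRS fusion are further steps not shown here.

```python
# Sketch of the NDBI step in the Google Earth Engine Python API
# (assumes prior authentication with the earthengine-api package).
import ee
ee.Initialize()

eec = ee.Geometry.Rectangle([100.8, 12.5, 102.1, 13.8])  # rough EEC bbox (assumed)

composite = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
             .filterBounds(eec)
             .filterDate("2020-01-01", "2020-12-31")
             .filter(ee.Filter.lt("CLOUD_COVER", 20))
             .median())

# NDBI = (SWIR1 - NIR) / (SWIR1 + NIR)
ndbi = composite.normalizedDifference(["SR_B6", "SR_B5"]).rename("NDBI")
built_up = ndbi.gt(0.1)   # assumed threshold; the study tunes this per index
```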

Keywords: built-up area extraction, Google Earth Engine, adaptive thresholding method, rapid mapping

Procedia PDF Downloads 120
2246 Effect of Extraction Methods on the Fatty Acids and Physicochemical Properties of Serendipity Berry Seed Oil

Authors: Olufunmilola A. Abiodun, Adegbola O. Dauda, Ayobami Ojo, Samson A. Oyeyinka

Abstract:

Serendipity berry (Dioscoreophyllum cumminsii Diels) is a tropical dioecious rainforest vine native to tropical Africa. The vine grows during the rainy season and is used mainly as a sweetener. The sweetener in the berry, known as monellin, is sweeter than sucrose. The sweetener is extracted from the fruits and the seed is discarded. The discarded seeds contain bitter principles but have a high yield of oil. Serendipity oil was extracted using three methods (n-hexane, expression, and expression/n-hexane), and the fatty acids and physicochemical properties of the oil obtained were determined. The oil obtained was clear and liquid, with an odour similar to hydrocarbons. The oil yields were 38.59, 12.34 and 49.57% for the hexane, expression and expression-hexane methods, respectively. The seed contained a high percentage of oil, especially using the combination of expression and hexane; a low percentage of oil was obtained using the expression method. The refractive indices were 1.443, 1.442 and 1.478 for the hexane, expression and expression-hexane methods, respectively. The peroxide value obtained for expression-hexane was higher than those for hexane and expression. The viscosities of the oil were 125.8, 128.76 and 126.87 cm³/s for the hexane, expression and expression-hexane methods, respectively, showing that the oil from the expression method was more viscous than the other oils. The major fatty acids in serendipity seed oil, in decreasing order, were oleic acid (62.81%), linoleic acid (22.65%), linolenic acid (6.11%), palmitic acid (5.67%) and stearic acid (2.21%); oleic acid, a monounsaturated fatty acid, had the highest value. Total unsaturated fatty acids were 91.574, 92.256 and 90.426% for hexane, expression, and expression-hexane, respectively. The combination of expression and hexane extraction of serendipity oil produced a high yield of oil. The oil could be refined for food and non-food applications.

Keywords: serendipity seed oil, expression method, fatty acid, hexane

Procedia PDF Downloads 270
2245 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data

Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene

Abstract:

Cutaneous melanoma is a melanocytic skin tumour which has a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for disease stage, prognosis and surgery planning. In this study, we hypothesized that automatic analysis of spectrophotometric images and high-frequency 2D ultrasound data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel automatic system for non-invasive differential diagnosis of melanocytic skin tumours and evaluation of penetration depth. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction, and classification with a linear regression classifier. Segmentation of the melanocytic skin tumour region in the ultrasound image is based on a parametric integrated backscattering coefficient calculation; segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated from the ultrasound data (11 acoustical, 4 shape and 15 textural parameters) and 55 quantitative features from the dermatoscopic and spectrophotometric images (using the total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the SIAscope spectrophotometric imaging device). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz centre-frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each lesion were evaluated during routine histological examination after excision and used as a reference. The results of this study show that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
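
The Otsu step mentioned for the optical images is standard; a minimal scikit-image sketch follows, with a hypothetical input file and the assumption that the lesion is darker than the surrounding skin.

```python
# Minimal sketch of Otsu-threshold segmentation of an optical lesion image.
from skimage import io, filters, measure

gray = io.imread("lesion.png", as_gray=True)   # hypothetical file

t = filters.threshold_otsu(gray)     # global Otsu threshold
mask = gray < t                      # lesion assumed darker than skin

# Keep the largest connected region as the lesion ROI
labels = measure.label(mask)
largest = max(measure.regionprops(labels), key=lambda r: r.area)
roi = labels == largest.label
print(f"threshold={t:.3f}, lesion area={largest.area} px")
```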

Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging

Procedia PDF Downloads 267
2244 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on an encoding scheme (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) built over low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, the deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered with differing size, category, layout and number, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while considering object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the different CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences, so scale-wise normalization followed by average pooling balances the influence of each scale, since different numbers of features are extracted at each. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to the results at a single scale). The result is comparable to state-of-the-art performance, suggesting that the representation can be applied to other visual recognition tasks.
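
The following is a compact, illustrative reimplementation of the Fisher Vector encoding, including the power- and L2-normalization on which scale-wise normalization builds. It is a generic FV encoder, not the authors' code, and random data stands in for the convolutional activations.

```python
# Compact Fisher Vector encoder over local descriptors with a diagonal GMM.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(X, gmm):
    """X: (N, D) local descriptors; gmm: fitted diagonal-covariance GMM."""
    N, D = X.shape
    q = gmm.predict_proba(X)                      # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    parts = []
    for k in range(gmm.n_components):
        diff = (X - mu[k]) / np.sqrt(var[k])      # standardized residuals
        g_mu = (q[:, k, None] * diff).sum(0) / (N * np.sqrt(w[k]))
        g_var = (q[:, k, None] * (diff**2 - 1)).sum(0) / (N * np.sqrt(2 * w[k]))
        parts += [g_mu, g_var]                    # 1st- and 2nd-order stats
    fv = np.concatenate(parts)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))        # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)      # L2 normalization

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                    # stand-in for conv features
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(X)
print(fisher_vector(X, gmm).shape)                # (2 * 8 * 64,) = (1024,)
```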

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 327
2243 Techno-Economic Analysis (TEA) of Circular Economy Approach in the Valorisation of Pig Meat Processing Wastes

Authors: Ribeiro A., Vilarinho C., Luisa A., Carvalho J.

Abstract:

The pig meat industry generates large volumes of by- and co-products, such as blood, bones, skin, trimmings, organs, viscera, and skulls, during slaughtering and meat processing, and these must be treated and disposed of ecologically. The yield of these by-products has been reported to account for about 10% to 15% of the value of the live animal in developed countries, although animal by-products account for about two-thirds of the animal after slaughter. The principal wastes produced throughout the pig meat value chain were selected for further valorization: pig manure, pig bones, fats, skins, pig hair, wastewater, wastewater sludges, and other animal subproducts of type III. According to the potential valorization options, these wastes can be converted into biomethane, fertilizers (phosphorus and digestate), hydroxyapatite, and protein hydrolysates (keratin and collagen). This work includes comprehensive techno-economic analyses (TEA) for each valorization route or applied technology. Metrics such as Net Present Value (NPV), Internal Rate of Return (IRR), and payback period were used to evaluate economic feasibility. From this analysis, it can be concluded that, for biogas production, the scenarios using pig manure, wastewater sludges and mixed grass and leguminous wastes presented remarkably high economic feasibility, with positive payback periods, NPV, and IRR. The optimal scenario, combining pig manure with mixed grass and leguminous wastes, had a payback period of 1.2 years and produced 427,6269 m³ of biomethane annually. Regarding the chemical extraction of phosphorus and nitrogen, the results proved the process economically unviable due to negative cash flows, despite high recovery rates. The TEA of hydrolysis and extraction of keratin hydrolysates indicates that a unit processing and valorizing 10 tons of pig hair per year for the production of keratin hydrolysate has an NPV of €907,940, an IRR of 13.07%, and a payback period of 5.41 years, all of which suggest a highly promising project to explore in the future. In contrast, the results for hydrolysis and extraction of collagen hydrolysates showed a process that is economically unviable, with negative cash flows in all scenarios due to the high fat content of the raw materials; valorization of 10 tons of pig skin had a negative cash flow of €453,743.88. The TEA results for extraction and purification of hydroxyapatite from pig bones with pyrolysis indicate that a unit processing and valorizing 10 tons of pig bones per year has an NPV of €1,274,819, an IRR of 65.43%, and a payback period of 1.5 years over a timeline of 10 years with a discount rate of 10%. These valorization routes and the circular economy and biorefinery approach offer significant contributions to sustainable bio-based operations within the agri-food industry. This approach transforms waste into valuable resources, enhancing both environmental and economic outcomes and contributing to a more sustainable and circular bioeconomy.
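
The NPV/IRR/payback arithmetic underlying such figures can be reproduced with the numpy-financial package; the cash flows below are illustrative placeholders, not the study's data.

```python
# Worked NPV/IRR/payback check for one hypothetical valorization route.
import numpy_financial as npf

discount_rate = 0.10                        # the paper's stated discount rate
cash_flows = [-900_000] + [250_000] * 10    # year-0 CAPEX, then annual net inflows

npv = npf.npv(discount_rate, cash_flows)    # first value is at time 0
irr = npf.irr(cash_flows)

# Simple (undiscounted) payback: first year cumulative cash flow turns positive
cumulative, payback = 0.0, None
for year, cf in enumerate(cash_flows):
    cumulative += cf
    if payback is None and cumulative >= 0:
        payback = year
print(f"NPV = {npv:,.0f} EUR, IRR = {irr:.2%}, payback = {payback} years")
```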

Keywords: techno-economic analysis (TEA), pig meat processing wastes, circular economy, bio-refinery

Procedia PDF Downloads 5
2242 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK Biobank Study Utilizing Supervised Machine Learning

Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz

Abstract:

Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, understanding of its underlying pathophysiology remains incomplete. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK Biobank cohort and randomly selected patients without a code for IBS, for a total sample size of 18,000. We selected the comorbidity codes of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method and applied the different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression and XGBoost algorithms yielded a diagnosis of IBS with optimal accuracies of 71.08%, 71.427%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular disease), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety). This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms to predict the development of IBS from comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
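
A sketch of the described feature-selection step, with synthetic data standing in for the UK Biobank extract: rank binary comorbidity codes by XGBoost importance, keep the top 10%, and refit a simpler model on that subset.

```python
# Sketch: XGBoost importance ranking -> top-10% features -> logistic regression.
import numpy as np
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(18_000, 500)).astype(float)  # binary comorbidity codes
y = rng.integers(0, 2, size=18_000)                        # IBS vs. non-IBS labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

booster = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
booster.fit(X_tr, y_tr)

top_k = int(0.10 * X.shape[1])                             # top 10% = 50 features
top_idx = np.argsort(booster.feature_importances_)[::-1][:top_k]

logreg = LogisticRegression(max_iter=1000).fit(X_tr[:, top_idx], y_tr)
print("accuracy:", accuracy_score(y_te, logreg.predict(X_te[:, top_idx])))
```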

Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics

Procedia PDF Downloads 114
2241 A Robust and Efficient Segmentation Method Applied to the Cardiac Left Ventricle with Abnormal Shapes

Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono

Abstract:

Segmentation of the left ventricle (LV) from cardiac ultrasound images provides quantitative functional analysis of the heart for diagnosing disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that the initialization of the shape model may not be sufficiently close to the target, especially when dealing with the abnormal shapes that occur in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. First, a robust and efficient detector based on a Hough forest is proposed to localize cardiac feature points, and these points are used to predict the initial fit of the LV shape model. Second, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images, mostly of abnormal shapes, and compared to several combinations of ASM and existing initialization methods. The experimental results demonstrate that the accuracy of feature point detection for initialization improved by 40% compared to existing methods. Moreover, the proposed method significantly reduces the number of ASM fitting loops required, speeding up the whole segmentation process. The proposed method therefore achieves more accurate and efficient segmentation and is applicable to hearts with unusual shapes caused by cardiac disease, such as left atrial enlargement.

Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle

Procedia PDF Downloads 335
2240 Rapid Identification and Diagnosis of Pathogenic Leptospiras through Comparison among Culture, PCR and Real-Time PCR Techniques on Samples of Human and Mouse Feces

Authors: S. Rostampour Yasouri, M. Ghane, M. Doudi

Abstract:

Leptospirosis is one of the most significant infectious and zoonotic diseases, with global spread. The disease causes economic losses and human fatalities in various countries, including the northern provinces of Iran. Given the multifaceted clinical manifestations of the disease and the risk of premature death of patients, the aim of this research is to identify pathogenic leptospiras and compare rapid diagnostic techniques for them. In the spring and summer of 2020-2022, 25 fecal samples were collected from suspected leptospirosis patients and 25 fecal samples from mice residing in the rice fields and factories of Tonekabon city. Samples were prepared by centrifugation and passage through membrane filters. Culture was performed in liquid and solid EMJH media during one month of incubation at 30°C, after which the media were examined microscopically. DNA extraction was conducted with an extraction kit. Diagnosis of leptospiras was performed by PCR and real-time PCR (SYBR Green) using a lipL32-specific primer. Among the patients, 11 samples (44%) and 8 samples (32%) were determined to carry pathogenic Leptospira by real-time PCR and PCR, respectively. Among the mice, 9 samples (36%) and 3 samples (12%) were determined to carry pathogenic Leptospira by the respective techniques. Although culture is considered the gold standard, it is not a rapid technique, due to the slow growth of pathogenic Leptospira and the failure of some species to form colonies. Real-time PCR allowed rapid diagnosis with much higher accuracy than PCR, because PCR could not reliably identify samples with a lower microbial load.

Keywords: culture, pathogenic leptospiras, PCR, real time PCR

Procedia PDF Downloads 78
2239 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets

Authors: Kothuri Sriraman, Mattupalli Komal Teja

Abstract:

In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, whether of irregular, digit, or character shape. Objects and internal objects are difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained when identifying the sub-regional objects using the SASK algorithm. The main focus is on recognizing the number of internal objects in a given image in a shadow-free and error-free manner. Hard clustering and density clustering of the rough set obtained from the image are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally application of the hull detection system. Detecting sub-regional hulls can increase machine learning capability in the detection of characters, and the approach can also be extended to hull recognition in irregularly shaped objects, such as intensity regions around black holes in space exploration imagery. Layered hulls are those having structured layers inside, which is useful in military services and traffic applications for identifying the number of vehicles or persons. The proposed SASK algorithm helps identify such regions and can be useful in decision processes (to clear traffic, or to identify the number of persons on an opposing side in war). A generic illustration of the boundary-extraction and hull-detection steps is sketched below.
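
The following is a generic OpenCV sketch of boundary extraction and hull counting; the SASK algorithm itself is the paper's contribution and is not reproduced here, and the input file is hypothetical.

```python
# Illustrative hull detection and counting with OpenCV.
import cv2

image = cv2.imread("handwritten.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, binary = cv2.threshold(image, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Boundary extraction: outer and inner (nested) contours via RETR_CCOMP
contours, hierarchy = cv2.findContours(binary, cv2.RETR_CCOMP,
                                       cv2.CHAIN_APPROX_SIMPLE)

hulls = [cv2.convexHull(c) for c in contours]
inner = sum(1 for h in hierarchy[0] if h[3] != -1)  # contours with a parent
print(f"{len(hulls)} hull regions, of which {inner} are internal objects")
```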

Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm

Procedia PDF Downloads 344
2238 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods have seen increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have recently received some use in this task; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph to select the features that are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It improves on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 153
2237 The Long-Term Effects of Immediate Implantation, Early Implantation and Delayed Implantation in the Aesthetic Area

Authors: Xing Wang, Lin Feng, Xuan Zou, Hongchen Liu

Abstract:

Immediate implantation after tooth extraction is considered the ideal way to retain alveolar bone, but some scholars believe the aesthetic results of early implantation are more reliable. In this retrospective study, 89 patients were followed for up to 5 years. Assessment indicators included implant survival (peri-implant infection, implant loosening, shedding, and crown and occlusal status), aesthetics (colour and fullness of the gums, papilla height, probing depth, X-ray alveolar crest height, the patient's own aesthetic satisfaction, and doctors' aesthetic scores), repair of defects around the implant (changes in peri-implant bone height and thickness, whether autologous bone grafts were used, and whether absorbable or non-absorbable repair materials were used), treatment time, cost, and the use of antibiotics. The results demonstrated no significant difference in the long-term success rates of immediate, early and delayed implantation (p > 0.05). However, the results indicated that the immediate implantation group achieved better aesthetic results after two years (p < 0.05), though with an increased risk of complications and failures (p < 0.05). High-risk indicators include gingival recession, labial bone wall damage, thin gingival biotype, poor implant position, and poor occlusal restoration. Regardless of the implantation method selected, the extraction method and bone augmentation technique were observed to be significant factors in the aesthetic outcome (p < 0.05).

Keywords: immediate implantation, long-term effects, aesthetics area, dental implants

Procedia PDF Downloads 354
2236 In Vitro Antioxidant and Cytotoxic Activities Against Human Oral Cancer and Human Laryngeal Cancer of Limonia acidissima L. Bark Extracts

Authors: Kriyapa Lairungruang, Arunporn Itharat

Abstract:

Limonia acidissima L. (LA) (common name: wood apple; Thai name: ma-khwit) is a medicinal plant which has long been used in Thai traditional medicine. Its bark is used for the treatment of diarrhea, abscesses, wound healing and inflammation, and it is also used against oral cancer. Thus, this research aimed to investigate the antioxidant and cytotoxic activities of LA bark extracts produced by various extraction methods. Different extraction procedures were used to extract LA bark for biological activity testing: boiling in water, maceration with 95% ethanol, maceration with 50% ethanol, and boiling in water of the residues from the 95% and 50% ethanolic macerations. All extracts were tested for antioxidant activity using the DPPH radical scavenging assay, and for cytotoxic activity against human laryngeal epidermoid carcinoma (HEp-2) cells and human oral epidermoid carcinoma (KB) cells using the sulforhodamine B (SRB) assay. The 95% ethanolic extract of LA bark showed the highest antioxidant activity, with an EC50 value of 29.76±1.88 µg/ml. For cytotoxic activity, the 50% ethanolic extract showed the best activity against HEp-2 and KB cells, with IC50 values of 9.55±1.68 and 18.90±0.86 µg/ml, respectively. This study demonstrated that the 95% ethanolic extract of LA bark has moderate antioxidant activity and the 50% ethanolic extract has potent cytotoxic activity against HEp-2 and KB cells. These results support the traditional use of LA for the treatment of oral cancer and laryngeal cancer, as well as its ongoing use.

Keywords: antioxidant activity, cytotoxic activity, laryngeal epidermoid carcinoma, Limonia acidissima L., oral epidermoid carcinoma

Procedia PDF Downloads 476
2235 Color Image Compression/Encryption/Contour Extraction Using 3L-DWT and SSPCE Method

Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah

Abstract:

Data security is needed in data transmission, storage, and communication. This paper is divided into two parts. The work deals with color images, which are decomposed into red, green and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red-channel pixels as an image scrambling process. All channels are then encrypted separately using a key image of the same size as the original, generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. Contours can be extracted from the recovered color images with an acceptable level of distortion using the single-step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images and completely reconstruct them without any distortion. It is also shown that the algorithm offers extremely strong security against attacks such as salt-and-pepper noise and JPEG compression, proving that color images can be protected at a higher security level. The presented method is easy to implement in hardware and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
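
The Arnold (cat map) scrambling step is easy to sketch for a square channel, with the iteration count acting as part of the key. An illustrative NumPy version follows; it is not the authors' implementation.

```python
# Minimal Arnold cat-map scrambling of a square (N x N) image channel.
import numpy as np

def arnold(channel: np.ndarray, iterations: int) -> np.ndarray:
    n = channel.shape[0]
    assert channel.shape[0] == channel.shape[1], "cat map needs a square image"
    out = channel.copy()
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx, ny = (x + y) % n, (x + 2 * y) % n    # [[1,1],[1,2]] mod n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]            # bijective pixel permutation
        out = scrambled
    return out

red = np.arange(64, dtype=np.uint8).reshape(8, 8)
scrambled = arnold(red, iterations=5)
# The map is periodic: enough further iterations recover the original.
```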

Keywords: SSPCE method, image compression, salt-and-pepper attacks, bit-plane decomposition, Arnold transform, color image, wavelet transform, lossless image encryption

Procedia PDF Downloads 515
2234 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are of much importance and demand more accuracy, greater robustness is expected of the face recognition system, with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelets and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead of the Gabor filters. This image is convolved with a bank of Gabor filters of varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class space and maximize the inter-class space. The variants used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)²LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt(2D)²LDA). LDA reduces the feature dimension by extracting the features with the greatest variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database and the faces94 database. The proposed HGWLDA approach is found to provide better results than the existing Gabor approach.
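
A hedged end-to-end sketch of this kind of pipeline on the AT&T (Olivetti) faces follows, with classical LDA standing in for the paper's 2D-LDA variants and an illustrative filter bank and k; scikit-image and scikit-learn supply the pieces.

```python
# Sketch: Gabor magnitude features -> LDA subspace -> k-NN classifier.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from skimage.filters import gabor
from skimage.transform import resize

faces = fetch_olivetti_faces()
images = [resize(im, (32, 32)) for im in faces.images]   # reduce dimension first

def gabor_features(img):
    feats = []
    for frequency in (0.2, 0.4):                         # 2 scales (assumed)
        for theta in np.arange(0, np.pi, np.pi / 4):     # 4 orientations
            real, imag = gabor(img, frequency=frequency, theta=theta)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.std()]
    return np.array(feats)

X = np.array([gabor_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, faces.target, test_size=0.25,
                                          stratify=faces.target, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)   # inter/intra-class subspace
knn = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X_tr), y_tr)
print("accuracy:", knn.score(lda.transform(X_te), y_te))
```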

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 464
2233 Speech Detection Model Based on a Deep Neural Network Classifier for Speech Emotion Recognition

Authors: Aisultan Shoiynbek, Darkhan Kuanyshbay, Paulo Menezes, Akbayan Bekarystankyzy, Assylbek Mukhametzhanov, Temirlan Shoiynbek

Abstract:

Speech emotion recognition (SER) has received increasing research interest in recent years. It is common practice to utilize emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: the emotions are not natural, meaning that machines learn to recognize fake emotions; the emotions are very limited in quantity and poor in variety of speaking; there is some language dependency in SER; and consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. This paper proposes an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved in the proposed approach. One of the first objectives in this sequence is the speech detection issue. The paper provides a detailed description of a speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite being developed for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction on real tasks has been performed.
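
The general shape of such a frame-level detector is easy to sketch. Layer sizes, the MFCC dimensionality, and the input file below are assumptions, and training on labeled frames is omitted.

```python
# Sketch of a fully connected speech/non-speech detector over MFCC frames
# (weights here are untrained; a real detector is trained on labeled frames).
import librosa
import torch
import torch.nn as nn

def mfcc_frames(wav_path: str) -> torch.Tensor:
    y, sr = librosa.load(wav_path, sr=16_000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, n_frames)
    return torch.tensor(mfcc.T, dtype=torch.float32)     # one row per frame

detector = nn.Sequential(            # frame-level speech / non-speech
    nn.Linear(13, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),                # logits: [non-speech, speech]
)

frames = mfcc_frames("utterance.wav")          # hypothetical file
with torch.no_grad():
    speech_mask = detector(frames).argmax(dim=1).bool()
print(f"{speech_mask.float().mean():.0%} of frames flagged as speech")
```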

Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset

Procedia PDF Downloads 21
2232 Physico-Mechanical Behavior of Indian Oil Shales

Authors: K. S. Rao, Ankesh Kumar

Abstract:

The search for alternative energy sources to petroleum has intensified because of growing need and the depletion of petroleum reserves. The importance of oil shales as an economically viable substitute has therefore increased many-fold in the last 20 years, and technologies like hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for the formation of oil shales include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms; these rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach to oil extraction is drastically breaking the bonds of the organics, which involves a retorting process. The two approaches to retorting are surface retorting and in-situ processing, of which in-situ processing is the most environmentally friendly. The three steps involved in in-situ processing are fracturing, injection to achieve communication, and fluid migration at the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400°C, the kerogen decomposes into oil, gas and residual carbon in a process referred to as pyrolysis. It is therefore very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observation that these rocks behave anisotropically, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources. Knowing the engineering behavior under these conditions will allow us to simulate deep-ground retorting conditions numerically and experimentally. Many researchers have investigated the effect of organic content on the engineering behavior of oil shale, but the coupled effect of the organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time; studies have indicated that these coals and carbonaceous shales constitute the principal source rocks that have generated the hydrocarbons produced from the region. Rock cores of representative samples were collected by on-site drilling, as coring in the laboratory is very difficult due to the rock's highly anisotropic nature. Different tests were performed to understand the petrology of these samples, and chemical analyses were done to precisely quantify the organic content of these rocks. The mechanical properties of these rocks were investigated at different anisotropy angles, and the results obtained from petrology and chemical analysis were correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that organic content is negatively correlated with tensile strength, compressive strength and modulus of elasticity.

Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior

Procedia PDF Downloads 343
2231 Investigation of Type and Concentration Effects of Solvent on Chemical Properties of Saffron Edible Extract

Authors: Sharareh Mohseni

Abstract:

Purpose: The objective of this study was to find a suitable solvent to produce saffron edible extract with improved chemical properties. Design/methodology/approach: Dried and pulverized stigmas of C. sativus L. (10 g) were extracted with 300 ml of solvents, including distilled water (DW), ethanol/DW, methanol/DW, propylene glycol/DW, heptane/DW, and hexane/DW, for 3 days at 25°C, and then centrifuged at 3000 rpm. The extracts were evaporated using a rotary evaporator at 40°C. The fiber- and solvent-free extracts were then analyzed by UV spectrophotometry to detect the saffron quality parameters, including crocin, picrocrocin and safranal. Findings: The distilled water/ethanol mixture as the extraction solvent caused larger amounts of the plant constituents to diffuse into the extract compared to the other treatments and the control. Polar solvents, including distilled water, ethanol, and propylene glycol (but not methanol), were more effective at extracting crocin, picrocrocin, and safranal than non-polar solvents. Social implications: Due to its enhanced color and flavor, saffron extract is economical compared to natural saffron. Saffron extract saves preparation time and reduces the amount of saffron required for imparting the same flavor, as compared to dry saffron. Liquid extract is easier to use and standardize in food preparations compared to dry stigmas and can be dosed more precisely than natural saffron. Originality/value: No research had been done on the production of saffron edible extract using the solvents studied in this survey. The novelty of this research is high, and the results can be used industrially.

Keywords: Crocus sativus L., saffron extract, solvent extraction, distilled water

Procedia PDF Downloads 444
2230 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method for incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. First, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Second, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN was effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
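
The first step amounts to simple vector concatenation before the convolutional layers; a toy sketch follows, with assumed dimensions and stand-in lookup tables (the actual LSV construction from medical entities is the paper's contribution and is not reproduced here).

```python
# Toy sketch of enriching a term's word2vec embedding with its
# lexical-semantic vector (LSV) by concatenation.
import numpy as np

EMB_DIM, LSV_DIM = 100, 20                         # assumed dimensions

word2vec = {"cough": np.random.rand(EMB_DIM)}      # stand-in for a trained model
lsv_table = {"cough": np.random.rand(LSV_DIM)}     # stand-in for entity-derived LSVs

def enriched_vector(term: str) -> np.ndarray:
    w = word2vec.get(term, np.zeros(EMB_DIM))
    s = lsv_table.get(term, np.zeros(LSV_DIM))
    return np.concatenate([w, s])                  # (EMB_DIM + LSV_DIM,)

print(enriched_vector("cough").shape)              # (120,)
```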

Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision

Procedia PDF Downloads 122
2229 Effects of Different Mechanical Treatments on the Physical and Chemical Properties of Turmeric

Authors: Serpa A. M., Gómez Hoyos C., Velásquez-Cock J. A., Ruiz L. F., Vélez Acosta L. M., Gañan P., Zuluaga R.

Abstract:

Turmeric (Curcuma longa L.) is an Indian rhizome known for its biological properties, which derive from active compounds such as curcuminoids. Curcumin, the main polyphenol in turmeric, represents only around 3.5% of the dehydrated rhizome, and extraction yields between 41 and 90% have been reported. Therefore, for every 1000 tons of turmeric powder used for the extraction of curcumin, around 970 tons of residues are generated. The present study evaluates the effect of different mechanical treatments (Waring blender, grinder and high-pressure homogenization) on the physical and chemical properties of turmeric, as an alternative for the transformation of the entire rhizome. Suspensions of turmeric (10, 20 and 30%) were processed in a Waring blender for 3 min at 12000 rpm, while the samples treated in the grinder were processed at two different gaps (-1 and -1.5). Finally, high-pressure homogenization was carried out at 500 bar. According to the results, the luminosity of the samples increases with the severity of the mechanical treatment, due to stabilization of the color associated with the inactivation of oxidative enzymes. Additionally, according to the microstructure of the samples, processing in the grinder (gap -1.5) and by high-pressure homogenization allowed the largest size reduction, reaching sizes down to 3 µm (measured by optical microscopy). These processes disrupt the cells and break their fragments into small suspended particles. The infrared spectra obtained from the samples using an attenuated total reflectance accessory indicate changes in the 800-1200 cm⁻¹ region, related mainly to changes in the starch structure. Finally, thermogravimetric analysis shows the presence of starch, curcumin and some minerals in the suspensions.

Keywords: characterization, mechanical treatments, suspensions, turmeric rhizome

Procedia PDF Downloads 161
2228 Railway Transport as a Potential Source of Polychlorinated Biphenyls in Soil

Authors: Nataša Stojić, Mira Pucarević, Nebojša Ralević, Vojislava Bursić, Gordan Stojić

Abstract:

Surface soil (0-10 cm) samples from 52 sampling sites along the railway tracks in the territory of Srem (the western part of the Autonomous Province of Vojvodina, itself part of Serbia) were collected and analyzed for 7 polychlorinated biphenyls (PCBs) in order to see how the distance from the railroad, on the one hand, and from landfills, on the other, affects the concentration of PCBs (CPCB) in the soil. Samples were taken at distances of 0.03 to 4.19 km from the railway and 0.43 to 3.35 km from the landfills. Soxhlet extraction (USEPA 3540S) was used for the soil extraction, and the extracts were purified on a silica-gel column (USEPA 3630C). Analysis of the extracts was performed by gas chromatography with tandem mass spectrometry. PCBs were not detected at only two locations. The mean total concentration of PCBs over all other sampling locations was 0.0043 ppm dry weight (dw), with a range of 0.0005 to 0.0227 ppm dw. Factors affecting the concentration of PCBs were isolated from the relevant data with statistical methods (PCA). The data were also analyzed using Pearson's chi-squared test, which showed that the hypothesis of independence between CPCB and distance from the railway can be rejected. The hypothesis of independence between CPCB and the percentage of humus in the soil can also be rejected; in contrast, for CPCB and the distance from the landfill, the hypothesis of independence cannot be rejected. Based on these results, it can be said that railway transport is a potential source of PCBs. The next step in this research is to establish the positions of the transformers located near the sampling sites, as another important factor affecting the concentration of PCBs in the soil.
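
The independence test used here is standard; a sketch with scipy on an invented 2x2 contingency table of concentration class versus distance class follows (the counts are illustrative, not the study's data).

```python
# Sketch of Pearson's chi-squared independence test: PCB concentration
# class vs. distance-to-railway class.
import numpy as np
from scipy.stats import chi2_contingency

#                 near railway   far from railway
table = np.array([[18,            8],     # CPCB above median
                  [ 7,           19]])    # CPCB below median

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
if p < 0.05:
    print("reject independence: CPCB appears related to railway distance")
```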

Keywords: GC/MS, landfill, PCB, railway, soil

Procedia PDF Downloads 330
2227 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules

Authors: Mohsen Maraoui

Abstract:

With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing for multilingual text documents based on conceptual network formalism. We propose to use this formalism as an indexing language to represent the descriptive concepts and their weighting. These concepts represent the content of the document. Our contribution is based on two steps. In the first step, we propose the extraction of index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on the association rules model (in an attempt to discover the non-taxonomic, or contextual, relations between the concepts of a document). These relations are latent relations buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. Next, we apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.
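A minimal sketch of the term-to-concept step follows, using NLTK's English WordNet as a stand-in for EuroWordNet (which is not freely distributed) and normalized term frequencies as a placeholder weighting scheme; both are assumptions, since the abstract does not specify the implementation:

from collections import Counter
from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def index_concepts(tokens):
    """Map index terms to weighted concepts (WordNet synsets)."""
    weights = Counter()
    for term, freq in Counter(tokens).items():
        synsets = wn.synsets(term)
        if synsets:
            # First-synset lookup is a crude stand-in for disambiguation
            weights[synsets[0].name()] += freq
    total = sum(weights.values())
    # Normalize frequencies into concept weights
    return {concept: f / total for concept, f in weights.items()}

print(index_concepts(["document", "retrieval", "document", "translation"]))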

Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing

Procedia PDF Downloads 137
2226 Application of Aquatic Plants for the Remediation of Organochlorine Pesticides from Keenjhar Lake

Authors: Soomal Hamza, Uzma Imran

Abstract:

Organochlorine pesticides bioaccumulate in the fat of fish, birds, and animals, through which they enter the human food cycle. Due to their persistence and stability in the environment, many health impacts are associated with them, most of which are carcinogenic in nature. In this study, the levels of organochlorine pesticides in Keenjhar Lake were detected and remediated using a rhizoremediation technique. 14 OC pesticides, namely Aldrin, Dieldrin, Heptachlor, Heptachlor epoxide, Endrin, Endosulfan I and II, DDT, DDE, DDD, and Alpha-, Beta-, and Gamma-BHC, and two plants, namely water hyacinth and Salvinia molesta, were used in the system in a pot experiment that ran for 11 days. A consortium was inoculated in both plants to increase their efficiency. Water samples were processed using liquid-liquid extraction. Sediment and root samples were processed using the Soxhlet method, followed by clean-up and gas chromatography. Delta-BHC was predominant in all samples, with mean concentrations (ppb) and standard deviations of 0.02 ± 0.14, 0.52 ± 0.68, and 0.61 ± 0.06 in water, sediment, and root samples, respectively. The highest levels were of Endosulfan II in the water, sediment, and root samples. Water hyacinth proved to be a better bioaccumulator than Salvinia molesta. The pattern of compound reduction rates by the end of the experiment was Delta-BHC > DDD > Alpha-BHC > DDT > Heptachlor > Heptachlor epoxide > Dieldrin > Aldrin > Endrin > DDE > Endosulfan I > Endosulfan II. No significant difference was observed between the pots with and without the consortium addition. Phytoremediation is a promising technique, but more studies are required to assess the bioremediation potential of different aquatic plants and the plant-endophyte relationship.
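The reduction-rate ordering reported above can be reproduced from initial and final concentrations; here is a short Python sketch with invented values, since the study's raw concentrations are not given in the abstract:

# Hypothetical initial/final concentrations (ppb) over the 11-day experiment
initial = {"Delta-BHC": 0.61, "DDD": 0.40, "Endosulfan II": 0.90}
final = {"Delta-BHC": 0.05, "DDD": 0.08, "Endosulfan II": 0.70}

# Percent reduction per compound, then rank from most to least reduced
reduction = {c: 100 * (initial[c] - final[c]) / initial[c] for c in initial}
for compound, pct in sorted(reduction.items(), key=lambda kv: -kv[1]):
    print(f"{compound}: {pct:.1f}% reduced")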

Keywords: aquatic plants, bioremediation, gas chromatography, liquid-liquid extraction

Procedia PDF Downloads 144
2225 The Study of Spray Drying Process for Skimmed Coconut Milk

Authors: Jaruwan Duangchuen, Siwalak Pathaveerat

Abstract:

Coconut (Cocos nucifera) belongs to the family Arecaceae. Coconut juice and meat are consumed as food and dessert in several regions of the world. Coconut juice is low in protein, and arginine is its main amino acid. Coconut meat is the endosperm of the coconut and has nutritional value; it is composed of carbohydrate, protein, and fat. The objective of this study is the utilization of by-products from the virgin coconut oil extraction process by converting the skimmed coconut milk into a powder. The skimmed coconut milk was separated from the coconut milk in the virgin coconut oil extraction process and consists of approximately 6.4% protein, 7.2% carbohydrate, 0.27% dietary fiber, 6.27% sugar, 3.6% fat, and 86.93% moisture. This skimmed coconut milk can be made into a powder as a value-added product by spray drying. The factors affecting the yield and properties of dry skimmed coconut milk in the spraying process are the inlet and outlet air temperatures and the maltodextrin concentration. Maltodextrin content (15 and 20%), outlet air temperature (80 ºC, 85 ºC, 90 ºC), and inlet air temperature (190 ºC, 200 ºC, 210 ºC) were evaluated in the skimmed coconut milk spray drying process. The spray dryer air flow rate was kept constant (0.2698 m³/s). Moisture content (2.22-3.23%), solubility in water, bulk density (0.4-0.67 g/mL), wettability (4.04-19.25 min), color, and particle size were analyzed for the powder samples. The maximum yield (18.00%) of spray-dried coconut milk powder was obtained at an inlet temperature of 210 °C, an outlet temperature of 80 °C, and 20% maltodextrin, with a drying time of 27.27 seconds. Amino acid analysis by the HPLC method (UV detector) showed that the most abundant amino acids are glutamine (16.28%), arginine (10.32%), and glycine (9.59%).
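The abstract does not state the basis of the 18.00% yield; the Python sketch below assumes yield is recovered powder mass over total feed solids (native milk solids plus maltodextrin), a common convention, with illustrative input values that are not the study's measurements:

def spray_yield(powder_g, feed_g, solids_fraction, maltodextrin_fraction):
    # Total solids entering the dryer: native milk solids plus carrier
    solids_in = feed_g * (solids_fraction + maltodextrin_fraction)
    return 100 * powder_g / solids_in

# Illustrative: 1 kg feed at ~13% milk solids with 20% maltodextrin added
print(f"yield = {spray_yield(59.4, 1000, 0.13, 0.20):.1f}%")  # ~18.0%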

Keywords: maltodextrin, skimmed coconut milk, spray drying, virgin coconut oil process (VCO)

Procedia PDF Downloads 328
2224 Effect of Solvents in the Extraction and Stability of Anthocyanin from the Petals of Caesalpinia pulcherrima for Natural Dye-Sensitized Solar Cell

Authors: N. Prabavathy, R. Balasundaraprabhu, S. Shalini, Dhayalan Velauthapillai, S. Prasanna, N. Muthukumarasamy

Abstract:

Dye-sensitized solar cells (DSSC) have become a significant research area due to their fundamental and scientific importance in the area of energy conversion. Synthetic dyes as sensitizers in DSSC are efficient and durable, but they are costly, toxic, and have a tendency to degrade. Natural sensitizers contain plant pigments such as anthocyanin, carotenoid, flavonoid, and chlorophyll, which promote light absorption as well as the injection of charges into the conduction band of TiO₂ through the sensitizer. However, the efficiency of natural dyes is not up to the mark, mainly due to the instability of pigments such as anthocyanin. The stability issues in vitro are mainly due to the effect of the solvents used for the extraction of anthocyanins and their respective pH. Taking this factor into consideration, in the present work, anthocyanins were extracted from the flower Caesalpinia pulcherrima (C. pulcherrima) with various solvents, and their respective stability and pH values are discussed. The use of citric acid as a solvent to extract anthocyanin has shown better stability than the other solvents. It also helps in enhancing the sensitization properties of anthocyanins with titanium dioxide (TiO₂) nanorods. The IPCE spectra show higher photovoltaic performance for dye-sensitized TiO₂ nanorods when citric acid is used as the solvent. The natural DSSC using citric acid as solvent shows a higher efficiency compared to the other solvents. Hence, citric acid proves to be a safe solvent for natural DSSC, boosting the photovoltaic performance and maintaining the stability of anthocyanins.
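IPCE spectra such as those mentioned above are commonly computed from the standard relation IPCE(%) = 1240 × Jsc / (λ × Pin); the Python sketch below applies it with illustrative input values, not the paper's measurements:

def ipce_percent(jsc_mA_cm2, wavelength_nm, pin_mW_cm2):
    # Jsc in mA/cm^2, wavelength in nm, incident power in mW/cm^2
    return 1240 * jsc_mA_cm2 / (wavelength_nm * pin_mW_cm2)

# e.g. 0.10 mA/cm^2 of photocurrent at 520 nm (near the anthocyanin
# absorption peak) under 1 mW/cm^2 monochromatic illumination
print(f"IPCE = {ipce_percent(0.10, 520, 1.0):.1f}%")  # ~23.8%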

Keywords: Caesalpinia pulcherrima, citric acid, dye sensitized solar cells, TiO₂ nanorods

Procedia PDF Downloads 284