Search results for: extracting numerals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 394

124 Chemical Study of Volatile Organic Compounds (VOCS) from Xylopia aromatica (LAM.) Mart (Annonaceae)

Authors: Vanessa G. P. Severino, João Gabriel M. Junqueira, Michelle N. G. do Nascimento, Francisco W. B. Aquino, João B. Fernandes, Ana P. Terezan

Abstract:

The scientific interest in analyzing VOCs represents a significant modern research field, given their importance in most branches of present-day life and industry. It is therefore extremely important to investigate, identify and isolate volatile substances, since they can be used in different areas, such as food, medicine, cosmetics, perfumery, aromatherapy, pesticides, repellents and other household products, through methods for extracting volatile constituents such as solid phase microextraction (SPME), hydrodistillation (HD), solvent extraction (SE), Soxhlet extraction, supercritical fluid extraction (SFE), steam distillation (SD) and vacuum distillation (VD). Chemometrics is an area of chemistry that uses statistical and mathematical tools for the planning and optimization of experimental conditions, and for extracting relevant chemical information from multivariate chemical data. In this context, the focus of this work was the study of the VOCs of the species X. aromatica by SPME, in search of constituents that can be used in the industrial sector, particularly in food, cosmetics and perfumery, where such compounds play a considerable role. In addition, chemometric analysis was used to maximize the responses of the experiments, in order to detect the largest number of compounds. The investigation of flowers from X. aromatica in vitro and in vivo proved consistent, although certain factors are presumed to influence the composition of metabolites, and the chemometric analysis strengthened the interpretation. Thus, the study of the chemical composition of X. aromatica contributed to knowledge of the species' VOCs and their possible application.

Keywords: chemometrics, flowers, HS-SPME, Xylopia aromatica

Procedia PDF Downloads 329
123 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes

Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo

Abstract:

Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information that is essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a difficult task, as opposed to dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes like survival or mortality can be useful in influencing the judgement of clinical sentiment in ICU clinical notes. This paper introduces two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model, which is based on clinicalBERT, achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the accuracy of prediction slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
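The three evaluation metrics quoted in this abstract (accuracy, AUC, precision) can be reproduced with a few lines of code. This is a minimal sketch on made-up labels and scores, not the paper's data; the AUC is computed via the Mann-Whitney interpretation (probability that a positive outranks a negative).

```python
# Toy re-computation of binary-classification metrics: accuracy, precision,
# and AUC. Labels and scores below are illustrative, not the study's data.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp)

def auc(y_true, scores):
    """AUC as the probability a positive outranks a negative (Mann-Whitney)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]          # model probabilities
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5

print(accuracy(y_true, y_pred), precision(y_true, y_pred), auc(y_true, scores))
```

Note that AUC is threshold-free while accuracy and precision depend on the 0.5 cut-off, which is why a paper reports all three.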

Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation

Procedia PDF Downloads 167
122 Fuzzy Control of Thermally Isolated Greenhouse Building by Utilizing Underground Heat Exchanger and Outside Weather Conditions

Authors: Raghad Alhusari, Farag Omar, Moustafa Fadel

Abstract:

A traditional greenhouse is a metal-frame agricultural building used for cultivating plants in a controlled environment isolated from external climatic changes. Using greenhouses in agriculture is an efficient way to reduce water consumption, as agriculture is the biggest water consumer worldwide. Controlling the greenhouse environment yields better plant productivity but demands more electric power. Although various control approaches have been applied to greenhouse automation, most of them target traditional greenhouses with ventilation fans and/or evaporative cooling systems, and such approaches still demand high energy and water consumption. The aim of this research is to develop a fuzzy control system that minimizes water and energy consumption by utilizing outside weather conditions and an underground heat exchanger to maintain the optimum climate of the greenhouse. The proposed control system is implemented on an experimental model of a thermally isolated greenhouse structure with dimensions of 6x5x2.8 meters. It uses fans for extracting heat from the ground heat exchanger system, motors for automatic opening/closing of the greenhouse windows, and LEDs as the lighting system. The controller is also integrated with environmental condition sensors. It was found that using an air-to-air horizontal ground heat exchanger with 90 mm diameter and 2 mm thickness, placed 2.5 m below the ground surface, decreases the greenhouse temperature by 3.28 °C, which saves around 3 kW of consumed energy. It also eliminated the water consumption needed in evaporative cooling systems, which are traditionally used for cooling the greenhouse environment.
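The fuzzy-control idea in this abstract can be sketched with triangular membership functions and Sugeno-style weighted-average defuzzification. The membership breakpoints, rule outputs, and the single temperature input below are illustrative assumptions, not the controller actually implemented in the study.

```python
# Minimal fuzzy rule evaluation: fuzzify greenhouse air temperature into
# cool/warm/hot, then defuzzify to a fan duty cycle by weighted average.

def tri(x, a, b, c):
    """Triangular membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_duty(temp_c):
    cool = tri(temp_c, 10, 18, 26)   # "temperature is cool"
    warm = tri(temp_c, 22, 28, 34)   # "temperature is warm"
    hot  = tri(temp_c, 30, 40, 50)   # "temperature is hot"
    # Rules: cool -> fan off, warm -> fan at 50%, hot -> fan at 100%.
    w = cool + warm + hot
    return (cool * 0.0 + warm * 0.5 + hot * 1.0) / w if w else 0.0

print(fan_duty(25.0))   # partially cool, partially warm -> moderate duty
```

A real controller would add more inputs (humidity, solar radiation, outside temperature) and more rules, but the fuzzify-infer-defuzzify structure stays the same.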

Keywords: automation, earth-to-air heat exchangers, fuzzy control, greenhouse, sustainable buildings

Procedia PDF Downloads 104
121 Fake News Detection Based on Fusion of Domain Knowledge and Expert Knowledge

Authors: Yulan Wu

Abstract:

The spread of fake news on social media has posed significant harm to the public and the nation, with its threats spanning various domains, including politics, economics, health, and more. News on social media often covers multiple domains, and existing models studied by researchers and relevant organizations often perform well on datasets from a single domain. However, when these methods are applied to social platforms with news spanning multiple domains, their performance deteriorates significantly. Existing research has attempted to enhance detection performance on multi-domain datasets by adding single-domain labels to the data. However, these methods overlook the fact that a news article typically belongs to multiple domains, leading to the loss of domain knowledge contained within the news text. To address this issue, we note that news records in different domains often use different vocabularies to describe their content. In this paper, we propose a fake news detection framework that combines domain knowledge and expert knowledge. First, an unsupervised domain discovery module generates a low-dimensional vector for each news article, a domain embedding that can retain multi-domain knowledge of the news content. Then, a feature extraction module uses the domain embeddings discovered through unsupervised domain knowledge to guide multiple experts in extracting news knowledge for the total feature representation. Finally, a classifier determines whether the news is fake or not. Experiments show that this approach can improve multi-domain fake news detection performance while reducing the cost of manually labeling domain labels.
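The "domain embedding guides multiple experts" step describes a mixture-of-experts gating pattern. This is a toy sketch of that pattern only: softmax gate weights derived from domain logits combine per-expert feature vectors. The shapes, the three hard-coded experts, and the logits are all illustrative assumptions, not the paper's architecture.

```python
# Sketch of soft gating: a domain vector weights several "expert" feature
# extractors, and the weighted sum forms the final representation.
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def mixture_of_experts(domain_logits, expert_outputs):
    """Combine per-expert feature vectors by softmax gate weights."""
    gates = softmax(domain_logits)
    dim = len(expert_outputs[0])
    return [sum(g * out[i] for g, out in zip(gates, expert_outputs))
            for i in range(dim)]

# Three experts producing 2-d features; the gate favours expert 0.
feats = mixture_of_experts([2.0, 0.0, 0.0],
                           [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(feats)
```

In the real framework the gates would come from the unsupervised domain discovery module and the experts would be learned networks; the combination rule is the part sketched here.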

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 37
120 Remote Sensing and GIS-Based Environmental Monitoring by Extracting Land Surface Temperature of Abbottabad, Pakistan

Authors: Malik Abid Hussain Khokhar, Muhammad Adnan Tahir, Hisham Bin Hafeez Awan

Abstract:

Continuous environmental change across the globe due to increasing land surface temperature (LST) has become a vital phenomenon nowadays. LST is accelerating because of increasing greenhouse gases in the environment, which results in the melting of ice caps, ice sheets and glaciers. It has adverse effects not only on the vegetation and water bodies of a region but also severe impacts on monsoon areas, in the form of capricious rainfall, monsoon failure and extensive precipitation. The environment can be monitored with the help of various geographic information system (GIS) based algorithms, i.e., SC (single channel), DA (dual angle), Mao, Sobrino and SW (split window). Estimation of LST is very much possible from digital image processing of satellite imagery. This paper will encompass extraction of the LST of Abbottabad using the SW technique of GIS and remote sensing over the last ten years by means of Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) and Landsat 8, via their Thermal Infrared Sensor (TIRS) and Operational Land Imager (OLI, absent on Landsat 7 ETM+), having 100 m thermal and 30 m spectral resolutions. The emissivity and spectral radiance of the thermal bands will be used as input statistics in the SW algorithm for LST extraction. Emissivity will be derived from Normalized Difference Vegetation Index (NDVI) threshold methods using bands 2-5 of OLI with the help of eCognition software, and spectral radiance will be extracted from the TIR bands (bands 10-11 of Landsat 8 and band 6 of Landsat 7 ETM+). Accuracy of results will be evaluated against weather data as well. The resulting research will have a significant role for all tiers of governing bodies related to climate change departments.
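The split-window pipeline described above (NDVI, then NDVI-threshold emissivity, then the SW combination of the two brightness temperatures) can be sketched as below. The SW coefficients, the emissivity constants, and the crude emissivity-correction term are placeholder assumptions for illustration; the calibrated values used in the study would differ.

```python
# Hedged sketch of a split-window LST computation from two thermal-band
# brightness temperatures (Kelvin) with NDVI-threshold emissivity.

def ndvi(red, nir):
    return (nir - red) / (nir + red)

def emissivity(ndvi_val):
    # NDVI-threshold method: bare soil, mixed pixel, full vegetation.
    if ndvi_val < 0.2:
        return 0.97
    if ndvi_val > 0.5:
        return 0.99
    pv = ((ndvi_val - 0.2) / (0.5 - 0.2)) ** 2   # vegetation proportion
    return 0.97 + (0.99 - 0.97) * pv

def split_window_lst(t10, t11, eps, c0=1.0, c1=1.4, c2=0.32):
    """LST = T10 + c1*(T10 - T11) + c2*(T10 - T11)**2 + c0 + f(emissivity)."""
    return (t10 + c1 * (t10 - t11) + c2 * (t10 - t11) ** 2
            + c0 + (1 - eps) * 50.0)   # crude emissivity correction term

eps = emissivity(ndvi(red=0.08, nir=0.30))   # dense vegetation pixel
print(round(split_window_lst(t10=300.0, t11=298.5, eps=eps), 2))
```

In production this would run per-pixel over the full band arrays, with coefficients taken from the SW formulation the study adopts.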

Keywords: environment, Landsat 8, SW Algorithm, TIR

Procedia PDF Downloads 330
119 Understanding Profit Shifting by Multinationals in the Context of Cross-Border M&A: A Methodological Exploration

Authors: Michal Friedrich

Abstract:

Cross-border investment has never been easier than in today’s global economy. Despite recent initiatives tightening the international tax landscape, profit shifting and tax optimization by multinational entities (MNEs) in the context of cross-border M&A remain persistent and complex phenomena that warrant in-depth exploration. By synthesizing the outcomes of existing research, this study aims first to provide a methodological framework for identifying MNEs’ profit-shifting behavior and quantifying its fiscal impacts via various macroeconomic and microeconomic approaches. The study also proposes additional methods and qualitative/quantitative measures for extracting insight into the profit-shifting behavior of MNEs in the context of their M&A activities at industry and entity levels. To develop the proposed methods, this study applies knowledge of international tax laws and known profit-shifting conduits (incl. dividends, interest, and royalties) to several model cases/types of cross-border acquisitions and post-acquisition integration activities by MNEs and highlights important factors that encourage or discourage tax optimization. Follow-up research is envisaged to apply the methods outlined in this study to published data on real-world M&A transactions to gain practical country-by-country, industry- and entity-level insights. In conclusion, this study seeks to contribute to the ongoing discourse on profit shifting by providing a methodological toolkit for exploring the profit-shifting tendencies of MNEs in connection with their M&A activities and to serve as a backbone for further research. The study is expected to provide valuable insight to policymakers, tax authorities, and tax professionals alike.

Keywords: BEPS, cross-border M&A, international taxation, profit shifting, tax optimization

Procedia PDF Downloads 42
118 Phenolic Compounds and Antioxidant Capacity of Nine Genotypes of Thai Rice (Oryza sativa L.)

Authors: Pitchaon Maisuthisakul, Ladawan Changchub

Abstract:

Rice (Oryza sativa L.) is a staple diet in Thailand. Rice cultivation is a traditional occupation of Thailand, passed down through generations. The 1 Rai 1 San project is a new agricultural theory based on the sufficiency economy, using green technology without chemical substances. This study was conducted to evaluate total phenolics, using HPLC and colorimetric methods, including the total anthocyanin content of Thai rice extracted under simulated gastric and intestinal conditions, and to estimate antioxidant capacity using DPPH and thiocyanate methods. The color and visible spectra of the rice grains were also investigated. Rice grains were classified into three groups according to their color appearance. The light brown grain genotypes are Sin Lek, Jasmine 105, Lao Tek and Hawm Ubon. The red group comprises Sang Yod and Red Jasmine. Genotypes Kum, Hawm Kanya and Hawm Nil are black rice grains. Cyanidin-3-O-glucoside was found only in black rice genotypes, whereas chlorogenic acid was found in all rice grains. The black rice had higher phenolic content than the red and light brown samples. Phenolic acids constitute a small portion of the phenolic compounds after digestion in humans and contribute to the antioxidant activity of Thai rice grains. Anthocyanin contents of all rice extracts ranged from 45.9 to 442.1 mg CGE/kg. All rice extracts showed antioxidant efficiency lower than ferulic acid. Genotypes Kum and Hawm Nil exhibited antioxidant efficiency higher than α-tocopherol. Interestingly, the visible spectra of only the black rice genotypes showed a maximum peak at 530-540 nm. The results suggest that consumption of black rice gives more health benefits to the consumer.

Keywords: rice, phenolic, antioxidant, anthocyanin

Procedia PDF Downloads 319
117 Bio Energy from Metabolic Activity of Bacteria in Plant and Soil Using Novel Microbial Fuel Cells

Authors: B. Samuel Raj, Solomon R. D. Jebakumar

Abstract:

Microbial fuel cells (MFCs) are an emerging and promising method for achieving sustainable energy since they can remove contaminated organic matter and simultaneously generate electricity. Our approach was driven in three different ways: a bacterial fuel cell, a soil microbial fuel cell (soil MFC) and a plant microbial fuel cell (plant MFC). Bacterial MFC: sulphate-reducing bacteria (SRB) were isolated and identified as efficient electricigens able to produce ±2.5 V (689 mW/m²), with sustainable activity for 120 days. Experimental data with different MFCs revealed high electricity production harvested continuously for 90 days: 1.45 V (381 mW/m²) and 1.98 V (456 mW/m²), respectively. Biofilm formation was confirmed on the surface of the anode by high content screening (HCS) and scanning electron microscopy (SEM). Soil MFC: a soil MFC was constructed at low cost, and a standard MudWatt soil MFC was purchased from Keego Technologies (USA). Vermicompost soil (V1) produced high energy (±3.5 V for ±400 days) compared to agricultural soil (A1) (±2 V for ±150 days). Biofilm formation was confirmed by HCS and SEM analysis. This finding provides a method for extracting energy from organic matter and also suggests a strategy for promoting the bioremediation of organic contaminants in subsurface environments. Our soil MFCs successfully ran a 3.5 V fan and three LEDs continuously for 150 days. Plant MFC: Amaranthus caudatus (P1) and Triticum aestivum (P2) were used in plant MFCs to confirm electricity production from plant-associated microbes; four plant MFCs of uniform size were constructed and checked for energy production. P2 produced high energy (±3.2 V for 40 days) with two harvesting intervals, and P1 produced moderate energy without a harvesting interval (±1.5 V for 24 days). P2 was able to run a 3.5 V fan continuously for 10 days, whereas P1 needs optimization of growth conditions to produce high energy.

Keywords: microbial fuel cell, biofilm, soil microbial fuel cell, plant microbial fuel cell

Procedia PDF Downloads 310
116 Reducing the Imbalance Penalty through Artificial Intelligence Methods in Geothermal Production Forecasting: A Case Study for Turkey

Authors: Hayriye Anıl, Görkem Kar

Abstract:

In addition to being rich in renewable energy resources, Turkey is one of the countries that promise potential in geothermal energy production, with its high installed capacity, cheapness, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand due to the inadequacy of the production forecasts given in the day-ahead market. A better production forecast reduces the imbalance penalties of market participants and provides a better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed capacity in terms of geothermal, was estimated for the first and second weeks of March; the imbalance penalties were then calculated with these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset, and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression from among the traditional machine learning models outperformed the other models and exhibited the best performance. In addition, the estimation results on the feature-engineered dataset showed lower error rates than the basic dataset. It was concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, which is optimal and profitable for the organization.
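The link between forecast error and penalty can be made concrete with a toy settlement calculation: each period's deviation beyond a tolerance band is priced at a penalty rate. The 3% tolerance and flat price below are hypothetical placeholders, not the Turkish day-ahead market's actual settlement rules.

```python
# Illustrative imbalance-penalty calculation from a generation forecast:
# deviations beyond a tolerance band around actual output are penalized.

def imbalance_penalty(forecast_mwh, actual_mwh, price=100.0, tolerance=0.03):
    penalty = 0.0
    for f, a in zip(forecast_mwh, actual_mwh):
        deviation = abs(f - a)
        allowed = tolerance * a          # tolerance band around actual output
        if deviation > allowed:
            penalty += (deviation - allowed) * price
    return penalty

good_forecast = [10.2, 11.1, 9.8]    # close to actual -> no penalty
poor_forecast = [12.0, 8.0, 11.5]    # large errors   -> penalized
actual        = [10.0, 11.0, 10.0]

print(imbalance_penalty(good_forecast, actual),
      imbalance_penalty(poor_forecast, actual))
```

This is why reducing forecast error (e.g., via SVR with engineered features, as the study reports) translates directly into lower penalties.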

Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting

Procedia PDF Downloads 76
115 Analysis of Enhanced Built-up and Bare Land Index in the Urban Area of Yangon, Myanmar

Authors: Su Nandar Tin, Wutjanun Muttitanon

Abstract:

The availability of free global and historical satellite imagery provides a valuable opportunity for mapping and monitoring built-up areas year by year, constantly and effectively. Land distribution guidelines and the identification of changes are important in preparing and reviewing changes in the ground overview data. This study utilizes thirty years of Landsat imagery to acquire significant land-cover data that are extremely valuable for urban planning. This paper mainly focuses on extracting the built-up area of the city development zone from the satellite images of Landsat 5, 7 and 8 and Sentinel-2A from USGS at five-year intervals. The purpose is to analyze the change of the urban built-up area year by year and to assess the accuracy of mapping built-up and bare land areas, studying the trend of urban built-up changes over the period from 1990 to 2020. GIS tools such as the raster calculator and built-up area modelling are used in this study, and the indices, which include the enhanced built-up and bareness index (EBBI), normalized difference built-up index (NDBI), urban index (UI), built-up index (BUI) and normalized difference bareness index (NDBAI), are calculated to obtain a high-accuracy urban built-up area. Therefore, this study points out a variable approach to automatically mapping typical enhanced built-up and bare land changes (EBBI) with simple indices according to the outputs of the indexes. The enhanced built-up and bareness index (EBBI) of Sentinel-2A achieved 48.4% accuracy, compared with 15.6% for the Landsat images in 1990, while the urban expansion area increased from 43.6% in 1990 to 92.5% in 2020 in the study area over the last thirty years.
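The indices named above reduce to simple band ratios, sketched below for single per-pixel values. Band roles follow the usual Landsat 8 conventions (NIR = B5, SWIR1 = B6, TIR = B10) and the EBBI form follows its common definition; the coefficients should be verified against the study before reuse.

```python
# Built-up/bareness index calculations from per-pixel band values.
import math

def ndbi(swir, nir):
    """Normalized difference built-up index."""
    return (swir - nir) / (swir + nir)

def ndbai(swir, tir):
    """Normalized difference bareness index."""
    return (swir - tir) / (swir + tir)

def ebbi(nir, swir, tir):
    """Enhanced built-up and bareness index."""
    return (swir - nir) / (10.0 * math.sqrt(swir + tir))

# Built-up pixel: SWIR reflectance exceeds NIR, so both indices are positive.
print(round(ndbi(swir=0.35, nir=0.25), 3),
      round(ebbi(nir=0.25, swir=0.35, tir=0.45), 3))
```

Over a full scene these formulas run element-wise on the band rasters (the "raster calculator" step), and thresholding the result yields the built-up mask whose accuracy the study evaluates.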

Keywords: built-up area, EBBI, NDBI, NDBAI, urban index

Procedia PDF Downloads 124
114 Optimization and Validation for Determination of VOCs from Lime Fruit Citrus aurantifolia (Christm.) with and without California Red Scale Aonidiella aurantii (Maskell) Infestation by Using HS-SPME-GC-FID/MS

Authors: K. Mohammed, M. Agarwal, J. Mewman, Y. Ren

Abstract:

An optimized technique has been developed for extracting the volatile organic compounds which contribute to the aroma of lime fruit (Citrus aurantifolia). The volatile organic compounds of healthy lime fruit and fruit infested with California red scale Aonidiella aurantii were characterized using headspace solid phase microextraction (HS-SPME) combined with gas chromatography (GC) coupled with flame ionization detection (FID) and gas chromatography with mass spectrometry (GC-MS), as a very simple, efficient and nondestructive extraction method. A three-phase 50/30 μm PDMS/DVB/CAR fibre was used for the extraction process. The optimal sealing and fibre exposure times for volatiles from whole lime fruit to reach equilibrium in the headspace of the chamber were 16 and 4 hours, respectively, and 5 min was selected as the desorption time of the three-phase fibre. Herbivorous activity induces indirect plant defenses, such as the emission of herbivore-induced plant volatiles (HIPVs), which can be used by natural enemies for host location. GC-MS analysis showed qualitative differences among the volatiles emitted by infested and healthy lime fruit. The GC-MS analysis allowed the initial identification of 18 compounds, with similarities higher than 85%, in accordance with the NIST mass spectral library. One of these, D-limonene, was increased by A. aurantii infestation, and three were decreased: undecane, α-farnesene and 7-epi-α-selinene. From an applied point of view, the application of the above-mentioned VOCs may help boost the efficiency of biocontrol programs and natural enemies’ production techniques.

Keywords: lime fruit, Citrus aurantifolia, California red scale, Aonidiella aurantii, VOCs, HS-SPME/GC-FID-MS

Procedia PDF Downloads 181
113 Computation of Residual Stresses in Human Face Due to Growth

Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan

Abstract:

Growth and remodeling of biological structures have gained considerable attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as the design of prosthetics and optimized surgical operations. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically growth and remodeling are among the main sources. Extracting body organs from medical imaging does not produce any information regarding the existing residual stresses in the organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations, and accounting for them can improve the accuracy of mechanical analysis results. In this paper, we have implemented a computational framework based on fixed-point iteration to determine the residual stresses due to growth. Using nonlinear continuum mechanics and the concept of a fictitious configuration, we find the unknown stress-free reference configuration which is necessary for mechanical analysis. To illustrate the method, we apply it to a finite element model of a healthy human face whose geometry has been extracted from medical images. We have computed the distribution of residual stress in facial tissues, which can overcome the effect of gravity and keep the tissues firm. Tissue wrinkles caused by aging could be a consequence of decreasing residual stress no longer counteracting gravity. Considering these stresses has important applications in maxillofacial surgery: it helps surgeons predict the changes after surgical operations and their consequences.
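The fixed-point idea can be illustrated with a one-dimensional analogue: the imaged (loaded) length of a column sagging under gravity is known, and we iterate to recover the unloaded reference length L0 such that deforming L0 under gravity reproduces the observation. The quadratic stretch model is an illustrative assumption, standing in for the paper's nonlinear finite-element deformation map.

```python
# Toy 1-d fixed-point recovery of a stress-free reference configuration.

def loaded_length(L0, k=0.02):
    """Deformation map: self-weight stretch grows with L0**2 (assumed model)."""
    return L0 + k * L0 ** 2

def recover_reference(L_obs, k=0.02, tol=1e-12, max_iter=100):
    L0 = L_obs                       # initial guess: the observed geometry
    for _ in range(max_iter):
        L_new = L_obs - k * L0 ** 2  # fixed-point update
        if abs(L_new - L0) < tol:
            return L_new
        L0 = L_new
    return L0

L_obs = 1.05                         # "imaged" length under gravity
L0 = recover_reference(L_obs)
print(round(L0, 6), round(loaded_length(L0), 6))  # loaded(L0) matches L_obs
```

In the 3-d finite-element setting, each iteration replays the gravity load on the current candidate reference mesh and compares the deformed shape to the imaged geometry, exactly as this scalar loop compares lengths.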

Keywords: growth, soft tissue, residual stress, finite element method

Procedia PDF Downloads 323
112 Rheological Assessment of Oil Well Cement Paste Dosed with Cellulose Nanocrystal (CNC)

Authors: Mohammad Reza Dousti, Yaman Boluk, Vivek Bindiganavile

Abstract:

During the past few decades, oil and natural gas consumption has increased significantly. The limited amount of hydrocarbon resources on earth has led to a stronger desire for efficient drilling, well completion and extraction, with the least time, energy and money wasted. Well cementing is one of the most crucial steps in any well completion, filling the annulus between the casing string and the well bore. However, since it takes place at the end of the drilling process, a satisfying and acceptable job is rarely done, and a significant amount of time and energy is then spent on the required corrections, or on retrofitting the well in some cases. Oil well cement paste needs to be pumped during the cementing process, so the rheological and flow behavior of the paste is of great importance. This study examines the effect of innovative cellulose-based nanomaterials on the flow properties of the resulting cementitious system. The cementitious paste developed in this research is composed of water, class G oil well cement, bentonite and cellulose nanocrystals (CNC), with bentonite used as a cross-contamination component. Initially, the influence of CNC on the flow and rheological behavior of CNC and bentonite suspensions was assessed. Furthermore, the rheological behavior of oil well cement pastes dosed with CNC was studied using a steady-shear parallel-plate rheometer, and the results were compared to the rheological behavior of a neat oil well cement paste with no CNC. The parameters assessed were the yield shear stress and the viscosity. Significant changes in both were observed due to the addition of the CNC. Based on the findings in this study, the addition of a very small dosage of CNC to the oil well cement paste results in a more viscous cement slurry with a higher yield stress, demonstrating shear-thinning behavior.
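The two parameters the study reports (yield shear stress and viscosity) are commonly read off a Bingham-plastic fit, tau = tau0 + mu_p * gamma_dot, via least squares on rheometer flow-curve data. The sketch below uses synthetic data points lying on a known line, not the study's measurements, and a simple linear fit rather than the shear-thinning (e.g., Herschel-Bulkley) model a full analysis might use.

```python
# Least-squares Bingham-plastic fit to a rheometer flow curve.

def bingham_fit(shear_rates, shear_stresses):
    """Fit tau = tau0 + mu*gamma_dot; returns (yield stress tau0, viscosity mu)."""
    n = len(shear_rates)
    mx = sum(shear_rates) / n
    my = sum(shear_stresses) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(shear_rates, shear_stresses))
    sxx = sum((x - mx) ** 2 for x in shear_rates)
    mu = sxy / sxx
    tau0 = my - mu * mx
    return tau0, mu

rates    = [10.0, 20.0, 40.0, 80.0]   # shear rate, 1/s
stresses = [12.0, 14.0, 18.0, 26.0]   # shear stress, Pa: tau = 10 + 0.2*rate
tau0, mu = bingham_fit(rates, stresses)
print(tau0, mu)   # recovers tau0 close to 10.0 Pa and mu close to 0.2 Pa*s
```

A higher fitted tau0 after CNC dosing would correspond to the higher yield stress the abstract reports.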

Keywords: cellulose nanocrystal, flow behavior, oil well cement, rheology

Procedia PDF Downloads 196
111 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid

Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan

Abstract:

In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing industry in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B) and extraction time (C) on the efficiency of copper extraction were investigated, with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process. The results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. Results showed that all linear terms, and the interaction term between the acid-volume-to-sludge-quantity ratio and extraction time (BC), had a statistically significant influence on the efficiency of copper extraction under the tested conditions, in which the most significant effect was ascribed to the acid-volume-to-sludge-quantity ratio (B), followed by sulfuric acid concentration (A), extraction time (C) and the interaction term BC, respectively. The remaining two-way interaction terms (AB, AC) and the three-way interaction term (ABC) were not statistically significant at the significance level of 0.05. A model equation was derived for the copper extraction process, and the process was optimized using a multiple-response method called the desirability (D) function, targeting maximum removal. The optimum conditions for 99% copper extraction were found to be a sulfuric acid concentration of 0.9 M and a ratio of acid volume (mL) to sludge quantity (g) of 100:1, with an extraction time of 80 min. Experiments under the optimized conditions have been carried out to validate the accuracy of the model.
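How main effects are estimated in a two-level factorial screening like the A, B, C design above can be shown in a few lines: each factor's effect is the mean response at its high level minus the mean at its low level. The responses below are fabricated so that B dominates, then A, then C, mirroring the ordering the study reports; they are not the measured extraction efficiencies.

```python
# Main-effect estimation for a 2^3 full factorial design.
from itertools import product

# Coded levels (-1, +1) for A (acid conc.), B (acid:sludge ratio), C (time).
runs = list(product([-1, 1], repeat=3))
# Fabricated responses following y = 60 + 4*A + 10*B + 2*C in coded units.
response = [44, 48, 64, 68, 52, 56, 72, 76]

def main_effect(factor_index):
    hi = [y for x, y in zip(runs, response) if x[factor_index] == 1]
    lo = [y for x, y in zip(runs, response) if x[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate("ABC")}
print(effects)   # B shows the largest effect, then A, then C
```

ANOVA then tests which of these effects (and the interaction contrasts, computed the same way on level products like B*C) are statistically significant.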

Keywords: acid treatment, chemical extraction, sludge, waste management

Procedia PDF Downloads 172
110 The Role of Urban Agriculture in Enhancing Food Supply and Export Potential: A Case Study of Neishabour, Iran

Authors: Mohammadreza Mojtahedi

Abstract:

Rapid urbanization presents multifaceted challenges, including environmental degradation and public health concerns. As urban sprawl continues inevitably, it becomes essential to devise strategies to alleviate its pressures on natural ecosystems and elevate socio-economic benchmarks within cities. This research investigates urban agriculture’s economic contributions, emphasizing its pivotal role in food provisioning and export potential. Adopting a descriptive-analytical approach, field survey data were primarily collected via questionnaires. The tool’s validity was affirmed by expert opinions, and its reliability secured by achieving a Cronbach’s alpha score over 0.70 from 30 preliminary questionnaires. The research encompasses Neishabour’s population of 264,375, from which a sample size of 384 was extracted via Cochran’s formula. Findings reveal the significance of urban agriculture in food supply and its potential for exports, underlined by a p-value < 0.05. Neishabour’s urban farming can augment the export of organic commodities, fruits, vegetables and ornamental plants, and foster product branding. Moreover, it supports the provision of fresh produce, bolstering dietary quality. Urban agriculture further impacts urban development metrics, enhancing environmental quality, job opportunities, income levels, and aesthetics, while promoting rainwater utilization. Popular cultivations include peaches, Damask roses, and poultry, tailored to available spaces. Structural equation modeling indicates urban agriculture’s overarching influence, accounting for 56% of the variance, predominantly in food sufficiency and export proficiency.
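The sample size quoted above (384 from a population of 264,375) follows from Cochran's formula. This sketch reproduces the arithmetic with the conventional z = 1.96, p = 0.5, e = 0.05, which are assumed here since the abstract does not state them.

```python
# Cochran's sample-size formula with optional finite-population correction.
import math

def cochran_n0(z=1.96, p=0.5, e=0.05):
    """Initial estimate: n0 = z^2 * p * (1 - p) / e^2."""
    return z ** 2 * p * (1 - p) / e ** 2

def cochran_finite(n0, population):
    """Finite-population correction of the initial Cochran estimate."""
    return n0 / (1 + (n0 - 1) / population)

n0 = cochran_n0()
print(math.floor(n0), round(cochran_finite(n0, 264_375), 1))
```

With a population this large the finite-population correction barely moves the number, which is why the familiar 384 stands.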

Keywords: urban agriculture, food supply, export potential, urban development, environmental health, structural equation modeling

Procedia PDF Downloads 30
109 A Study on the Different Components of a Typical Back-Scattered Chipless RFID Tag Reflection

Authors: Fatemeh Babaeian, Nemai Chandra Karmakar

Abstract:

Chipless RFID is a wireless tracking and identification system that uses passive tags for encoding data. The advantage of a chipless RFID tag is having a planar tag which is printable on different low-cost materials like paper and plastic. The printed tag can be attached to different items at the labelling stage. Since the price of a chipless RFID tag can be as low as a fraction of a cent, this technology has the potential to compete with conventional optical barcode labels. However, due to the passive structure of the tag, processing the reflection signal is a crucial challenge. The captured signal reflected from a tag attached to an item consists of different components: the reflection from the reader antenna, the reflection from the item, the structural mode RCS component of the tag, and the antenna mode RCS of the tag. All these components are summed in both the time and frequency domains. The reflection from the item and the structural mode RCS component can distort or saturate the frequency domain signal and make it difficult to extract the desired component, the antenna mode RCS. Therefore, the reflection of the tag must be studied in both time and frequency domains to better understand the nature of the captured chipless RFID signal. Further benefits of this study are finding an optimised encoding technique at the tag design level and finding the best algorithm for processing the chipless RFID signal at the decoding level. In this paper, the reflection from a typical backscattered chipless RFID tag with six resonances is analysed, and the different components of the signal are separated in both time and frequency domains. Moreover, the time domain signal corresponding to each resonator of the tag is studied. The data for this processing were captured from simulation in CST Microwave Studio 2017.
The outcome of this study is an understanding of the different components of a measured signal in a chipless RFID system and the identification of a research gap: the need for an optimum detection algorithm for tag ID extraction.
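As a rough illustration of the time/frequency-domain separation discussed above, the sketch below synthesises a toy captured signal (an early structural-mode spike plus one slowly ringing 4 GHz resonance; all waveform parameters are invented for illustration, not taken from the CST simulation) and recovers the resonance by time-gating out the early reflection before transforming to the frequency domain:

```python
import numpy as np

fs = 100e9                        # sampling rate, 100 GS/s (assumed)
t = np.arange(0, 20e-9, 1 / fs)   # 20 ns capture window

# Toy captured signal: an early structural-mode spike (Gaussian at 1 ns)
# plus a slowly ringing 4 GHz resonance standing in for the antenna mode.
structural = np.exp(-((t - 1e-9) / 0.2e-9) ** 2)
antenna = 0.3 * np.exp(-t / 8e-9) * np.sin(2 * np.pi * 4e9 * t)
captured = structural + antenna

# Gate out the first 3 ns: the structural mode decays quickly, so the
# late-time response is dominated by the resonator ringing.
gated = captured * (t > 3e-9)

spectrum = np.abs(np.fft.rfft(gated))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]   # recovered resonance frequency
```

On this toy signal the dominant gated frequency lands at the 4 GHz resonance; real measurements additionally contain reader-antenna and item reflections, which is exactly why a principled detection algorithm is needed.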

Keywords: antenna mode RCS, chipless RFID tag, resonance, structural mode RCS

Procedia PDF Downloads 160
108 Effect of Shading in Evaporatively Cooled Greenhouses in the Mediterranean Region

Authors: Nikolaos Katsoulas, Sofia Faliagka, Athanasios Sapounas

Abstract:

Greenhouse ventilation is an effective way to remove extra heat from the greenhouse through air exchange between inside and outside when the outside air temperature is lower. However, in Mediterranean areas during summer, the outside air temperature exceeds 25 °C for most of the day, and natural ventilation cannot remove the excess heat from the greenhouse. Shade screens and whitewash are the main existing measures used to reduce the greenhouse air temperature during summer by reducing the solar radiation entering the greenhouse. However, this temperature reduction comes at the cost of reduced radiation. In addition, because of the high air temperatures outside the greenhouse, these systems are generally not sufficient for extracting the excess energy during sunny summer days, and other cooling methods, such as forced ventilation combined with evaporative cooling, are therefore needed. Evaporative cooling by means of pad-and-fan or fog systems is a common technique to reduce the sensible heat load by increasing the latent heat fraction of the dissipated energy. In most cases, greenhouse growers apply both shading and evaporative cooling when all these systems are available. If a movable screen is available, it is usually activated when a certain radiation level is reached. It is not clear whether shading screens should be used over the whole growth cycle or only during the most sensitive stages, when the crop has a low leaf area and canopy transpiration cannot contribute significantly to greenhouse cooling. Furthermore, it is not clear at which radiation level the screen should be activated.
This work presents the microclimate, cucumber crop physiological response, and yield observed in two greenhouse compartments equipped with a pad-and-fan evaporative cooling system and a thermal/shading screen activated at different radiation levels: when the outside solar radiation reaches 700 or 900 W/m2. The greenhouse is located in Velestino, in Central Greece, and the measurements were performed during the spring-summer period, with the outside air temperature during summer reaching values up to 42 °C.
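The screen-activation logic studied above can be sketched as a simple threshold controller; the hysteresis band below is a hypothetical addition (not part of the experiment described) to avoid rapid cycling around the set point:

```python
def screen_deployed(radiation, threshold=700.0, band=50.0, deployed=False):
    """Decide whether the shading screen should be deployed, given the
    outside solar radiation in W/m2. Deploy at or above the activation
    threshold; retract only once radiation drops below threshold - band
    (the hysteresis band is a hypothetical addition to avoid rapid
    cycling around the set point)."""
    if radiation >= threshold:
        return True
    if radiation < threshold - band:
        return False
    return deployed   # inside the band: keep the previous state
```

With `threshold=700.0` this mimics the first treatment; the second treatment would simply use `threshold=900.0`.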

Keywords: microclimate, shading, screen, pad and fan, cooling

Procedia PDF Downloads 36
107 Deproteinization of Moroccan Sardine (Sardina pilchardus) Scales: A Pilot-Scale Study

Authors: F. Bellali, M. Kharroubi, Y. Rady, N. Bourhim

Abstract:

In Morocco, the fish processing industry is an important source of income and generates a large amount of by-products, including skins, bones, heads, guts, and scales. These underutilized resources, particularly scales, contain a large amount of protein and calcium. Sardina pilchardus scales resulting from the transformation operation have the potential to be used as raw material for collagen production. Given this strong expectation of the regional fish industry, upgrading sardine scales is well justified. In addition, political and societal demands for sustainability and environmentally friendly industrial production systems, coupled with the depletion of fish resources, drive this trend forward. Fish scales used as a source of collagen therefore have a wide range of applications in the food, cosmetic, and biomedical industries. The main aim of this study is to isolate and characterize acid-solubilized collagen from the scales of the sardine, Sardina pilchardus. Experimental design methodology was adopted to optimize the collagen extraction process. The first stage of this work investigates the optimal conditions for sardine scale deproteinization using response surface methodology (RSM). The second part focuses on demineralization with HCl solution or EDTA, and the last part establishes the optimal conditions for isolating collagen from fish scales by solvent extraction. The advancement from laboratory scale to pilot scale is a critical stage in technological development. In this study, the optimal deproteinization conditions validated at laboratory scale were employed in the pilot-scale procedure. The deproteinization of fish scales was then demonstrated at pilot scale (2 kg scales, 20 L NaOH), resulting in a protein content of 0.2 mg/mL and a hydroxyproline content of 2.11 mg/L. These results indicate that the pilot scale showed performance similar to that of the laboratory scale.
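The response-surface step can be illustrated with a minimal sketch, using synthetic data and invented coefficients rather than the authors' measurements: a second-order model is fitted by least squares to a two-factor central-composite-style design (coded factors standing in for, e.g., NaOH concentration and treatment time):

```python
import numpy as np

rng = np.random.default_rng(0)

# Coded factor levels of a small central-composite-style design (invented).
x1 = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, 0.0, 0.0, -1.41, 1.41, 0.0, 0.0])
x2 = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, -1.41, 1.41])

# Design matrix for y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

true = np.array([90.0, 3.0, -2.0, -4.0, -1.5, 0.8])   # invented coefficients
y = X @ true + rng.normal(0.0, 0.1, x1.size)           # synthetic deproteinization yields

coef, *_ = np.linalg.lstsq(X, y, rcond=None)           # recovered second-order model
```

The fitted surface is then maximized (analytically or on a grid) to pick the optimal deproteinization condition, which is the role RSM plays in the study.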

Keywords: deproteinization, pilot scale, scale, sardine pilchardus

Procedia PDF Downloads 416
106 Conversion of Sweet Sorghum Bagasse to Sugars for Succinic Acid Production

Authors: Enlin Lo, Ioannis Dogaris, George Philippidis

Abstract:

Succinic acid is a compound used for manufacturing lacquers, resins, and other coating chemicals. It is also used in the food and beverage industry as a flavor additive. It is predominantly manufactured from petrochemicals, but it can also be produced by fermentation of sugars from renewable feedstocks, such as plant biomass. Bio-based succinic acid has great potential to become a platform chemical (building block) for commodity and high-value chemicals. In this study, the production of bio-based succinic acid from sweet sorghum was investigated. Sweet sorghum has a high fermentable sugar content and can be cultivated in a variety of climates. In order to avoid competition with food feedstocks, its non-edible ‘bagasse’ (the fiber remaining after extracting the juice) was targeted. Initially, various conditions for pretreating sweet sorghum bagasse (SSB) were studied in an effort to remove most of the non-fermentable components and expose the cellulosic fiber containing the fermentable sugars (glucose). Concentrated (83%) phosphoric acid was utilized at temperatures of 50-80 °C for 30-60 min at various SSB loadings (10-15%), coupled with enzymatic hydrolysis using a commercial cellulase (Ctec2, Novozymes), to identify the conditions that lead to the highest glucose yields for subsequent fermentation to succinic acid. As the pretreatment temperature and duration increased, the bagasse color changed from light brown to dark brown-black, indicating decomposition, which ranged from 15% to 72%, while the theoretical glucose yield is 91%. Using statistical analysis in Minitab, a model was built to identify the optimal pretreatment condition for maximum glucose release. The projected theoretical bio-based succinic acid production is 23 g per 100 g of SSB, which will be confirmed with fermentation experiments using the bacterium Actinobacillus succinogenes.
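The projected yield can be illustrated with a hedged back-of-the-envelope mass balance; the cellulose fraction and fermentation yield below are assumed, literature-typical values, not measurements from this study:

```python
# Back-of-the-envelope mass balance: grams of succinic acid obtainable
# from sweet sorghum bagasse (SSB). All default factors are illustrative
# assumptions, not measured values from this study.
def succinic_from_ssb(ssb_g,
                      cellulose_frac=0.40,   # assumed cellulose content of SSB
                      hydrolysis_gain=1.11,  # glucan -> glucose water addition (162 -> 180 g/mol)
                      glucose_yield=0.91,    # hydrolysis yield (theoretical value from the study)
                      sa_per_glucose=0.66):  # assumed fermentation yield, g succinic acid / g glucose
    glucose = ssb_g * cellulose_frac * hydrolysis_gain * glucose_yield
    return glucose * sa_per_glucose

projected = succinic_from_ssb(100.0)   # roughly 27 g per 100 g SSB
```

With these assumed factors the estimate lands in the same ballpark as the study's 23 g per 100 g SSB projection; the actual figure depends on the measured composition and fermentation performance.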

Keywords: biomass, cellulose, enzymatic hydrolysis, fermentation, pretreatment, succinic acid

Procedia PDF Downloads 181
105 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational alternatives. One such approach is predicting the three-dimensional structure directly from the residue chain; however, this has been proven to be an NP-hard problem, a complexity illustrated by the Levinthal paradox. An alternative is to predict intermediate structures, such as the secondary structure of the protein. Artificial intelligence methods, including Bayesian statistics, artificial neural networks (ANN), and support vector machines (SVM), have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for predicting protein secondary structure. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for secondary structure prediction is 88%. Alternatively, support vector machine prediction methods have been developed in pursuit of better results. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013).
The developed ANN method follows the same training and testing process used by Huang to validate his method, namely the CB513 protein data set and three-fold cross-validation, so that the statistical results of the two methods can be compared directly.
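A window-based input encoding of the kind used by ANN secondary structure predictors can be sketched as follows (a generic one-hot scheme, not Huang's exact feature set):

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def encode_windows(seq, w=13):
    """One-hot encode each residue of `seq` together with a sliding window
    of w residues centred on it, padding the sequence ends with zero
    vectors. Returns a (len(seq), w * 20) feature matrix, the classic
    input representation for window-based secondary structure predictors;
    non-standard residues are left as zeros."""
    half = w // 2
    n = len(seq)
    out = np.zeros((n, w * 20))
    for i in range(n):
        for j in range(-half, half + 1):
            k = i + j
            if 0 <= k < n:
                aa = AMINO.find(seq[k])
                if aa >= 0:
                    out[i, (j + half) * 20 + aa] = 1.0
    return out

X = encode_windows("MKTAYIAKQR")   # toy sequence -> (10, 260) feature matrix
```

Each row then feeds a classifier (ANN or SVM) predicting one of the three Q3 states (helix, strand, coil) for the central residue.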

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 583
104 Utilization of Activated Carbon for the Extraction and Separation of Methylene Blue in the Presence of Acid Yellow 61 Using an Inclusion Polymer Membrane

Authors: Saâd Oukkass, Abderrahim Bouftou, Rachid Ouchn, L. Lebrun, Miloudi Hlaibi

Abstract:

We invariably exist in a world steeped in colors, whether in our clothing, food, cosmetics, or even medications. However, most of the dyes we use pose significant problems, being both harmful to the environment and resistant to degradation. Among these dyes, methylene blue and acid yellow 61 stand out, commonly used to dye various materials such as cotton, wood, and silk. Fortunately, various methods have been developed to treat and remove these polluting dyes, among which membrane processes play a prominent role. These methods are praised for their low energy consumption, ease of operation, and their ability to achieve effective separation of components. Adsorption on activated carbon is also a widely employed technique, complementing the basic processes. It proves particularly effective in capturing and removing organic compounds from water due to its substantial specific surface area while retaining its properties unchanged. In the context of our study, we examined two crucial aspects. Firstly, we explored the possibility of selectively extracting methylene blue from a mixture containing another dye, acid yellow 61, using a polymer inclusion membrane (PIM) made of PVA. After characterizing the morphology and porosity of the membrane, we applied kinetic and thermodynamic models to determine the values of permeability (P), initial flux (J0), association constant (Kass), and apparent diffusion coefficient (D*). Subsequently, we measured activation parameters (activation energy (Ea), enthalpy (ΔH‡), entropy (ΔS‡)). Finally, we studied the effect of activated carbon on the processes carried out through the membrane, demonstrating a clear improvement. These results make the membrane developed in this study a potentially pivotal player in the field of membrane separation.
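The kinetic treatment behind the permeability (P) and initial flux (J0) values can be sketched with synthetic data, assuming the first-order model ln(C0/C(t)) = kt commonly used for polymer inclusion membrane transport (the feed volume and membrane area below are invented):

```python
import numpy as np

# Hedged sketch of the first-order kinetic treatment often applied to
# polymer inclusion membrane transport:
#   ln(C0 / C(t)) = k t,   P = k * V / A,   J0 = P * C0
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # sampling times, min
C = 10.0 * np.exp(-0.012 * t)                  # feed dye concentration, mg/L (synthetic)

k = np.polyfit(t, np.log(C[0] / C), 1)[0]      # slope of ln(C0/C) vs t, 1/min
V, A = 0.2, 1.0e-3                             # feed volume (L) and membrane area (m2), assumed
P = k * V / A                                  # permeability, L min^-1 m^-2
J0 = P * C[0]                                  # initial flux, mg min^-1 m^-2
```

With real data the linearity of the ln(C0/C) plot is itself the check that the first-order model applies before P and J0 are reported.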

Keywords: dyes, methylene blue, membrane, activated carbon

Procedia PDF Downloads 38
103 Multi-source Question Answering Framework Using Transformers for Attribute Extraction

Authors: Prashanth Pillai, Purnaprajna Mangsuli

Abstract:

Oil exploration and production companies invest considerable time and effort to extract essential well attributes (like well status, surface and target coordinates, wellbore depths, event timelines, etc.) from unstructured data sources like technical reports, which are often non-standardized, multimodal, and highly domain-specific by nature. It is also important to consider the context when extracting attribute values from reports that contain information on multiple wells/wellbores. Moreover, semantically similar information may often be depicted in different data syntax representations across multiple pages and document sources. We propose a hierarchical multi-source fact extraction workflow based on a deep learning framework to extract essential well attributes at scale. An information retrieval module based on the transformer architecture was used to rank relevant pages in a document source utilizing the page image embeddings and semantic text embeddings. A question answering framework utilizing the LayoutLM transformer was used to extract attribute-value pairs, incorporating the text semantics and layout information from the top relevant pages in a document. To better handle context while dealing with multi-well reports, we incorporate a dynamic query generation module to resolve ambiguities. The attribute information extracted from various pages and documents is standardized to a common representation using a parser module to facilitate information comparison and aggregation. Finally, we use a probabilistic approach to fuse information extracted from multiple sources into a coherent well record. The applicability of the proposed approach and related performance was studied on several real-life well technical reports.
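The page-ranking step can be illustrated with a minimal cosine-similarity sketch; the actual module combines page-image and text embeddings in a transformer ranker, so this shows only the generic retrieval idea with toy vectors:

```python
import numpy as np

def rank_pages(query_emb, page_embs):
    """Rank document pages by cosine similarity between a query embedding
    and per-page embeddings. Returns page indices, most similar first
    (a generic retrieval sketch, not the paper's transformer ranker)."""
    q = query_emb / np.linalg.norm(query_emb)
    P = page_embs / np.linalg.norm(page_embs, axis=1, keepdims=True)
    return np.argsort(-(P @ q))   # descending similarity

# Toy 2-d embeddings for three pages and a query about, say, wellbore depth.
pages = np.array([[0.1, 0.9], [0.9, 0.1], [0.7, 0.7]])
order = rank_pages(np.array([1.0, 0.1]), pages)
```

The top-ranked pages are then handed to the question answering model, which is where the layout information actually enters the pipeline.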

Keywords: natural language processing, deep learning, transformers, information retrieval

Procedia PDF Downloads 166
102 Efficient Video Compression Technique Using Convolutional Neural Networks and Generative Adversarial Network

Authors: P. Karthick, K. Mahesh

Abstract:

Video has become an increasingly significant part of our everyday digital communication. With richer content and higher display resolutions, its sheer volume poses serious obstacles to receiving, distributing, compressing, and displaying high-quality video content. In this paper, we propose an end-to-end deep video compression model that jointly optimizes all video compression components. The method involves splitting the video into frames, comparing the images using convolutional neural networks (CNN) to remove duplicates, and replacing duplicate images with a single image by recognizing and detecting minute changes using a generative adversarial network (GAN), recorded with long short-term memory (LSTM). Instead of the complete image, only the small changes generated using the GAN are substituted, which helps in frame-level compression. Pixel-wise comparison is performed using K-nearest neighbours (KNN) over the frame, clustered with K-means, and singular value decomposition (SVD) is applied to every frame of the video for all three color channels [Red, Green, Blue] to decrease the dimension of the utility matrix [R, G, B] by extracting its latent factors. Video frames are packed with parameters with the aid of a codec and converted to video format, and the results are compared with the original video. Repeated experiments on several videos with different sizes, durations, frames per second (FPS), and quality levels demonstrate a significant resampling rate. On average, the result produced had approximately a 10% deviation in quality and more than a 50% reduction in size when compared with the original video.
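The per-channel SVD dimensionality-reduction step can be sketched as a rank-k truncation of a single frame channel (synthetic data; k and the frame size below are arbitrary choices, not the paper's settings):

```python
import numpy as np

def truncate_channel(channel, k):
    """Rank-k SVD approximation of one colour channel: keep only the k
    largest singular values (latent factors), which is the per-channel
    dimensionality reduction of the utility matrix described above."""
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

rng = np.random.default_rng(1)
frame = rng.random((48, 64))                   # toy single-channel frame
approx = truncate_channel(frame, 8)            # rank-8 approximation
err = np.linalg.norm(frame - approx) / np.linalg.norm(frame)
```

Storing U[:, :k], s[:k], and Vt[:k] instead of the full channel is what yields the size reduction; k trades reconstruction quality against compression.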

Keywords: video compression, K-means clustering, convolutional neural network, generative adversarial network, singular value decomposition, pixel visualization, stochastic gradient descent, frame per second extraction, RGB channel extraction, self-detection and deciding system

Procedia PDF Downloads 161
101 DMBR-Net: Deep Multiple-Resolution Bilateral Networks for Real-Time and Accurate Semantic Segmentation

Authors: Pengfei Meng, Shuangcheng Jia, Qian Li

Abstract:

We propose a real-time, high-precision semantic segmentation network based on a multi-resolution feature fusion module, an auxiliary feature extraction module, an upsampling module, and an atrous spatial pyramid pooling (ASPP) module. We designed a feature fusion structure that integrates sufficient features of different resolutions. We also studied the effect of the side-branch structure on the network and, based on these findings, used a side-branch auxiliary feature extraction layer to improve the effectiveness of the network. We further designed an upsampling module that gives better results than the original one. In addition, we reconsidered the locations and number of ASPP modules and modified the network structure according to the experimental results to further improve its effectiveness. The network presented in this paper takes the backbone of BiSeNetV2 as its basic network, on which we built our structure and made the above improvements. We name this network the Deep Multiple-Resolution Bilateral Network for real-time segmentation, referred to as DMBR-Net. In experimental testing, our proposed DMBR-Net achieved 81.2% mIoU at 119 FPS on the Cityscapes validation dataset, 80.7% mIoU at 109 FPS on the CamVid test dataset, and 29.9% mIoU at 78 FPS on the COCO-Stuff test dataset. Compared with all lightweight real-time semantic segmentation networks, our network achieves the highest accuracy at an appropriate speed.
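The mIoU figures quoted above are computed with the standard mean intersection-over-union metric, which can be sketched as follows (toy 2x2 label maps, two classes):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over all classes that appear in
    either the prediction or the ground truth; this is the standard
    mIoU metric reported for semantic segmentation benchmarks."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

p = np.array([[0, 0], [1, 1]])   # toy prediction
t = np.array([[0, 1], [1, 1]])   # toy ground truth
score = mean_iou(p, t, 2)
```

Benchmark implementations additionally accumulate the intersections and unions over the whole validation set before dividing, rather than averaging per-image scores.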

Keywords: multi-resolution feature fusion, atrous convolutional, bilateral networks, pyramid pooling

Procedia PDF Downloads 107
100 ARABEX: Automated Dotted Arabic Expiration Date Extraction using Optimized Convolutional Autoencoder and Custom Convolutional Recurrent Neural Network

Authors: Hozaifa Zaki, Ghada Soliman

Abstract:

In this paper, we introduced an approach for Automated Dotted Arabic Expiration Date Extraction using Optimized Convolutional Autoencoder (ARABEX) with bidirectional LSTM. This approach is used for translating the Arabic dot-matrix expiration dates into their corresponding filled-in dates. A custom lightweight Convolutional Recurrent Neural Network (CRNN) model is then employed to extract the expiration dates. Due to the lack of available dataset images for the Arabic dot-matrix expiration date, we generated synthetic images by creating an Arabic dot-matrix True Type Font (TTF) matrix to address this limitation. Our model was trained on a realistic synthetic dataset of 3287 images, covering the period from 2019 to 2027, represented in the format of yyyy/mm/dd. We then trained our custom CRNN model using the generated synthetic images to assess the performance of our model (ARABEX) by extracting expiration dates from the translated images. Our proposed approach achieved an accuracy of 99.4% on the test dataset of 658 images, while also achieving a Structural Similarity Index (SSIM) of 0.46 for image translation on our dataset. The ARABEX approach demonstrates its ability to be applied to various downstream learning tasks, including image translation and reconstruction. Moreover, this pipeline (ARABEX+CRNN) can be seamlessly integrated into automated sorting systems to extract expiry dates and sort products accordingly during the manufacturing stage. By eliminating the need for manual entry of expiration dates, which can be time-consuming and inefficient for merchants, our approach offers significant results in terms of efficiency and accuracy for Arabic dot-matrix expiration date recognition.
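The synthetic dot-matrix generation idea can be illustrated as follows; the 7x5 bitmap of the digit '2' and the dot geometry are invented for illustration and are not the authors' TTF-based font:

```python
import numpy as np

# Illustrative 7x5 dot bitmap of the digit '2' (invented, not the ARABEX font).
TWO = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
])

def render_dots(bitmap, cell=6, radius=2):
    """Place a filled disc of `radius` pixels at the centre of each active
    cell, leaving gaps between dots as a dot-matrix print head would."""
    h, w = bitmap.shape
    img = np.zeros((h * cell, w * cell), dtype=np.uint8)
    yy, xx = np.mgrid[0:cell, 0:cell]
    disc = (yy - cell // 2) ** 2 + (xx - cell // 2) ** 2 <= radius ** 2
    for r in range(h):
        for c in range(w):
            if bitmap[r, c]:
                img[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell][disc] = 255
    return img

img = render_dots(TWO)   # 42x30 dot-matrix glyph image
```

Composing such glyphs into full yyyy/mm/dd strings, with varied spacing and noise, is the kind of synthetic data generation the dataset construction above relies on.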

Keywords: computer vision, deep learning, image processing, character recognition

Procedia PDF Downloads 48
99 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA

Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell

Abstract:

Hedgerows play an important role in a wide range of ecological habitats, landscape and agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection using satellite imagery is a challenging problem in remote sensing because, spatially, a hedge is very similar to a linear object such as a road, while, spectrally, a hedge is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges by acquiring images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data provide the opportunity to detect hedgerows as line features, but difficulties remain in monitoring their characterization at the landscape scale. In this research, TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015, are used to detect hedgerows in a test site near Fermoy, Ireland. Both channels of the dual-polarized Spotlight data (HH/VV) are used for hedgerow detection. Various SAR image techniques are tested by trial and error, integrating classification algorithms such as texture analysis, support vector machines, k-means, and random forests to detect hedgerows and characterize them. We apply Shannon entropy (ShE) and backscattering analysis of single and double bounce in a polarimetric analysis to perform object-oriented classification and, finally, to extract the hedgerow network. The work is still in progress, and other methods remain to be applied to find the best approach for the study area; here we present preliminary results showing that polarimetric TSX imagery can potentially detect hedgerows.
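The Shannon entropy (ShE) used in the polarimetric analysis can be sketched as the standard entropy of a normalised probability vector (e.g., the normalised eigenvalues of the polarimetric coherency matrix; the input below is a toy vector, not SAR data):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a non-negative vector, normalised to a
    probability distribution first; applied in polarimetry to the
    normalised eigenvalues of the coherency matrix, where high entropy
    indicates depolarised (volume-like) scattering such as vegetation."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

Low entropy pixels (one dominant scattering mechanism, e.g. a road) can thus be separated from high entropy pixels (mixed scattering, e.g. a hedge), which is one handle on the road/hedge ambiguity noted above.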

Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis

Procedia PDF Downloads 205
98 Optimization of Horticultural Crops by Using the Peats from Rawa Pening Lake as Soil Conditioner

Authors: Addharu Eri, Ningsih P. Lestari, Setyorini Adheliya, Syaiputri Khaidifah

Abstract:

Rawa Pening is a lake in the Ambarawa Basin in Central Java, Indonesia. It serves as a source of hydroelectric power, irrigation, and flood control. The potential of this lake is being degraded by the presence of the aquatic plant Eichhornia crassipes, which grows wild and can cover the lake with an accumulation of rotting biomass. This accumulation causes the formation of sediment with a high organic material content. Sediment formation leads to a shallowing of the lake and affects water quality. The deposition of organic material produces methane gas and hydrogen sulfide, and microbial decomposition in the lake water turns it muddy during rain. The shallowing of Rawa Pening Lake not only physically reduces water discharge but also has a major ecological impact on aquatic organisms, so the condition of the lake's peats cannot be treated as an unimportant issue. One solution is to use the peats as a compound material for growing horticultural crops, because the organic material content of mineral soils, particularly old soils, is low, and horticultural crops require organic material to promote growth. The horticultural crop used in this research is mustard cabbage (Brassica sp.). Using Rawa Pening's peats as a planting medium supplies high levels of organic material and can also ameliorate the soil's physical properties, indirectly serving as a soil conditioner. The research focuses on the contents of the peat and of the mustard cabbage product; the parameters examined are N-available, Ca, Mg, K, P, and C-organic. Ca, Mg, and K are analyzed using the soil base saturation measurement method, with the soil extracted using an NH4OAc solution. The aim of this study is to use the peats of Rawa Pening Lake as a soil conditioner and to increase the productivity of Brassica sp.
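The base-saturation calculation underlying the Ca/Mg/K analysis mentioned above follows a standard soil chemistry formula; the cation and CEC values below are invented for illustration:

```python
def base_saturation(ca, mg, k, na, cec):
    """Percent base saturation from exchangeable base cations and cation
    exchange capacity, all in cmolc/kg (standard formula; in practice the
    cation values come from an NH4OAc extraction such as the one used in
    this study)."""
    return 100.0 * (ca + mg + k + na) / cec

bs = base_saturation(ca=8.0, mg=3.0, k=0.5, na=0.3, cec=20.0)   # illustrative values
```

A higher base saturation generally indicates a more fertile, less acidic growing medium, which is the property of interest when evaluating the peats as a soil conditioner.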

Keywords: Brassica sp., peats, rawa pening lake, soil conditioner

Procedia PDF Downloads 219
97 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments

Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis

Abstract:

In global navigation satellite system (GNSS)-denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where neither GNSS nor any other a priori information about the environment is available, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation that verifies the approach. Hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Kinect™ camera from Microsoft.
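Extracting attitude from a planar surface in a depth-camera point cloud can be sketched as a least-squares plane fit; the tilted synthetic floor below is illustrative, not data from the Qbot 2 experiments:

```python
import numpy as np

def plane_normal(points):
    """Least-squares normal of a roughly planar point-cloud patch (e.g. a
    floor seen by the depth camera): the right singular vector associated
    with the smallest singular value of the centred points."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    n = Vt[-1]
    return n if n[2] >= 0 else -n   # orient the normal "up" for a floor patch

# Synthetic floor patch tilted 5 degrees about the x-axis.
rng = np.random.default_rng(2)
xy = rng.uniform(-1.0, 1.0, (200, 2))
tilt = np.deg2rad(5.0)
pts = np.column_stack([xy[:, 0], xy[:, 1], xy[:, 1] * np.tan(tilt)])

n = plane_normal(pts)
roll = np.rad2deg(np.arccos(n[2]))   # angle between floor normal and vertical
```

The recovered tilt (here 5 degrees) is the kind of attitude observation that can be fed to the Kalman filter to bound gyroscope drift.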

Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion

Procedia PDF Downloads 179
96 Insights into Archaeological Human Sample Microbiome Using 16S rRNA Gene Sequencing

Authors: Alisa Kazarina, Guntis Gerhards, Elina Petersone-Gordina, Ilva Pole, Viktorija Igumnova, Janis Kimsis, Valentina Capligina, Renate Ranka

Abstract:

The human body is inhabited by a vast number of microorganisms, collectively known as the human microbiome, and there is tremendous interest in evolutionary changes in human microbial ecology, diversity, and function. The field of paleomicrobiology, the study of the ancient human microbiome, is powered by modern Next Generation Sequencing (NGS) techniques, which allow microbial genomic data to be extracted directly from the archaeological sample of interest. One of the major techniques is 16S rRNA gene sequencing, in which certain 16S rRNA gene hypervariable regions are amplified and sequenced. However, this method has limitations, including differences in the taxonomic precision and efficacy of the regions used. The aim of this study was to evaluate the phylogenetic sensitivity of different 16S rRNA gene hypervariable regions for microbiome studies of archaeological samples. Towards this aim, archaeological bone samples and corresponding soil samples from each burial environment were collected in medieval cemeteries in Latvia. The Ion 16S™ Metagenomics Kit, targeting different 16S rRNA gene hypervariable regions, was used for library construction (Ion Torrent technology). Sequence data were analysed using appropriate bioinformatic techniques; alignment and taxonomic representation were done using the Mothur program. Sequences of the most abundant genera were further aligned to the E. coli 16S rRNA gene reference sequence using MEGA7 in order to identify the hypervariable region of each segment of interest. Our results showed that different hypervariable regions had different discriminatory power depending on the groups of microbes, as well as the nature of the samples. On the basis of our results, we suggest that a wider range of primers can provide a more accurate recapitulation of the microbial communities in archaeological samples. Acknowledgements: This work was supported by ERAF grant Nr. 1.1.1.1/16/A/101.
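Mapping an aligned segment to 16S rRNA hypervariable regions, as done above against the E. coli reference, can be sketched as an interval-overlap lookup; the coordinates below are approximate, commonly cited values, not necessarily those used in the study:

```python
# Approximate E. coli 16S rRNA coordinates of the hypervariable regions
# (boundaries vary between publications; these values are illustrative).
REGIONS = {
    "V1": (69, 99), "V2": (137, 242), "V3": (433, 497), "V4": (576, 682),
    "V5": (822, 879), "V6": (986, 1043), "V7": (1117, 1173),
    "V8": (1243, 1294), "V9": (1435, 1465),
}

def regions_spanned(start, end):
    """Hypervariable regions overlapped by a sequenced segment, given its
    start/end positions on the E. coli reference alignment."""
    return [name for name, (s, e) in REGIONS.items() if start <= e and end >= s]
```

For example, a segment aligning to positions 500-700 would be assigned to V4 under these coordinates, which is the kind of per-region attribution needed to compare discriminatory power.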

Keywords: 16S rRNA gene, ancient human microbiome, archaeology, bioinformatics, genomics, microbiome, molecular biology, next-generation sequencing

Procedia PDF Downloads 163
95 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential for assessing the composition, structure, and distribution of forest vegetation, which can serve as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands to be used in extracting forest cover classes using ENVI 4.8 and GIS software. The diameter at breast height (DBH), above-ground biomass (AGB), and carbon stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. Of the extracted canopy heights, 80% range from 12 m to 17 m. The CS of the three forest covers, based on AGB, was 20819.59 kg per 20 x 20 m plot for closed broadleaf, 8609.82 kg per 20 x 20 m plot for broadleaf plantation, and 15545.57 kg per 20 x 20 m plot for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has high percent forest cover and high CS.
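The CHM derivation and the AGB-to-carbon-stock conversion can be sketched as follows; the toy rasters are invented, and the 0.47 carbon fraction is an assumed IPCC-style default, since the study's exact factor is not stated:

```python
import numpy as np

def canopy_height_model(dsm, dtm):
    """Canopy height model from LiDAR-derived rasters: CHM = DSM - DTM,
    clipped at zero to suppress small negative interpolation artefacts."""
    return np.clip(dsm - dtm, 0.0, None)

def carbon_stock(agb_kg, carbon_fraction=0.47):
    """Carbon stock from above-ground biomass using an assumed IPCC-style
    default carbon fraction (illustrative; not the study's stated factor)."""
    return agb_kg * carbon_fraction

# Toy 2x2 terrain and surface rasters (elevations in metres).
dtm = np.array([[100.0, 101.0], [102.0, 103.0]])
dsm = np.array([[115.0, 100.5], [114.0, 120.0]])
chm = canopy_height_model(dsm, dtm)   # per-cell canopy heights
```

Per-plot AGB is then estimated from CHM-derived heights via allometric equations before applying the carbon fraction, which is how the per-plot CS values above would be obtained.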

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 350