Search results for: feature extraction multispectral
2704 KSVD-SVM Approach for Spontaneous Facial Expression Recognition
Authors: Dawood Al Chanti, Alice Caplier
Abstract:
Sparse representations of signals have received a great deal of attention in recent years. In this paper, the interest of using sparse representation as a means of performing sparse discriminative analysis between spontaneous facial expressions is demonstrated. An automatic facial expression recognition system is presented. It uses a KSVD-SVM approach comprising three main stages: a pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on random projection theory to obtain low-dimensional discriminative and reconstructive features; a dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under- or over-complete dictionaries for sparse coding; and finally a classification stage, which uses an SVM classifier for facial expression recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on the JAFFE static acted facial expression database as well as on the DynEmo dynamic spontaneous facial expression database exhibit very good recognition rates.
Keywords: dictionary learning, random projection, pose and spontaneous facial expression, sparse representation
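The three-stage pipeline this abstract describes (random projection, dictionary learning with sparse coding, then SVM classification) can be sketched as below. This is a toy illustration, not the paper's implementation: scikit-learn's `DictionaryLearning` stands in for a true K-SVD solver, and all data shapes and parameters are assumptions.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for two expression classes: 20 samples each, 100-dim descriptors
X = np.vstack([rng.normal(0.0, 1.0, (20, 100)), rng.normal(3.0, 1.0, (20, 100))])
y = np.array([0] * 20 + [1] * 20)

# Stage 1: random projection to a low-dimensional subspace
X_low = GaussianRandomProjection(n_components=30, random_state=0).fit_transform(X)

# Stage 2: learn a dictionary and sparse-code each sample (K-SVD analogue)
dico = DictionaryLearning(n_components=24, transform_algorithm='omp',
                          transform_n_nonzero_coefs=5, max_iter=10, random_state=0)
codes = dico.fit_transform(X_low)

# Stage 3: classify the sparse codes with an SVM
clf = SVC(kernel='linear').fit(codes, y)
print(codes.shape, clf.score(codes, y))
```

In the paper, the inputs would be facial-expression descriptors from JAFFE or DynEmo rather than Gaussian blobs.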
Procedia PDF Downloads 305
2703 Intrusion Detection System Using Linear Discriminant Analysis
Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou
Abstract:
Most of the existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and less accurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudo-inverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudo-inverse LDA method when large training data are available.
Keywords: LDA, pseudo-inverse, PCA, IDS, NSL-KDD, KDDcup99
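The first proposed solution (PCA to bring the dimension below the sample count, then LDA, then k-NN) can be sketched as follows; the data are synthetic stand-ins for network traffic records, and all sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# SSS regime: only 30 training records but 200 traffic features
X = np.vstack([rng.normal(0.0, 1.0, (15, 200)), rng.normal(2.0, 1.0, (15, 200))])
y = np.array([0] * 15 + [1] * 15)

# Solution 1 from the abstract: PCA first (so the within-class scatter matrix
# in the reduced space is no longer singular), then LDA, then k-NN
model = make_pipeline(PCA(n_components=10),
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
print(model.score(X, y))
```

The second solution (pseudo-inverse LDA) would instead replace the inverse of the within-class scatter matrix with `np.linalg.pinv` in a hand-rolled LDA.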
Procedia PDF Downloads 226
2702 Functionality and Application of Rice Bran Protein Hydrolysates in Oil in Water Emulsions: Their Stabilities to Environmental Stresses
Authors: R. Charoen, S. Tipkanon, W. Savedboworn, N. Phonsatta, A. Panya
Abstract:
Rice bran protein hydrolysates (RBPH) were prepared from defatted rice bran of two different Thai rice cultivars (Plai-Ngahm-Prachinburi, PNP, and Khao Dok Mali 105, KDM105) using an enzymatic method. This research aimed to optimize enzyme-assisted protein extraction. In addition, the functional properties of RBPH and their stabilities to environmental stresses, including pH (3 to 8), ionic strength (0 mM to 500 mM) and thermal treatment (30 °C to 90 °C), were investigated. Results showed that the optimal enzymatic process for protein extraction from defatted rice bran was as follows: enzyme concentration 0.075 g/5 g of protein, extraction temperature 50 °C and extraction time 4 h. The obtained protein hydrolysate powders had a degree of hydrolysis of 21.05% in PNP and 19.92% in KDM105. The solubility of the protein hydrolysates at pH 4-6 ranged from 27.28-38.57% and 27.60-43.00% in PNP and KDM105, respectively. In general, the antioxidant activities indicated by total phenolic content, FRAP, ferrous ion-chelating (FIC) and 2,2’-azino-bis-3-ethylbenzthiazoline-6-sulphonic acid (ABTS) assays were higher for KDM105 than for PNP. In terms of functional properties, the emulsifying activity index (EAI) was 8.78 m²/g protein in KDM105, whereas that of PNP was 5.05 m²/g protein. The foaming capacity at 5 minutes was 47.33% and 52.98% in PNP and KDM105, respectively. Glutamine, alanine, valine and leucine are the major amino acids in the protein hydrolysates, and the total amino acid content of KDM105 was higher than that of PNP. Furthermore, we investigated environmental stresses on the stability of a 5% oil in water emulsion (5% oil, 10 mM citrate buffer) stabilized by RBPH (3.5%). The droplet diameter of the emulsion stabilized by KDM105 was smaller (d < 250 nm) than that produced by PNP.
For environmental stresses, RBPH-stabilized emulsions were stable at pH around 3 and 5-6, at high salt (< 400 mM, pH 7) and at temperatures between 30 and 50 °C.
Keywords: functional properties, oil in water emulsion, protein hydrolysates, rice bran protein
Procedia PDF Downloads 218
2701 Technology Enriched Classroom for Intercultural Competence Building through Films
Authors: Tamara Matevosyan
Abstract:
In this globalized world, intercultural communication is becoming essential for understanding communication among people, for developing an understanding of cultures, and for appreciating the opportunities and challenges that each culture presents to people. Moreover, it plays an important role in understanding different behaviors in different cultures. Native speakers assimilate sociolinguistic knowledge in natural conditions, while this is a great problem for language learners, and in this context feature films reveal cultural peculiarities and involve students in real communication. Nowadays, a key goal of language learning is the development of intercultural competence, as communicating with someone from a different cultural background can be exciting and scary, frustrating and enlightening. Intercultural competence is important in the FL learning classroom, and here feature films can serve as essential tools to develop this competence and overcome the intercultural gap that foreign students face. The current proposal attempts to reveal the correlation between a given culture and its language through feature films. To ensure qualified, well-organized and practical classes on intercultural communication for language learners, a number of methods connected with movie watching have been implemented. All the pre-watching, while-watching and post-watching methods and techniques are aimed at developing students’ communicative competence. The application of such activities as Climax, Role-play, Interactive Language and Daily Life helps to reveal and overcome mistakes of a cultural and pragmatic character. All the above-mentioned activities are directed at the assimilation of language vocabulary with special reference to the given culture. The study delves into the essence of culture as one of the core concepts of intercultural communication.
Sometimes culture is not a priority in the process of language learning, which leads to misunderstandings in real-life communication. The application of various methods and techniques with feature films aims at developing students’ cultural competence and their understanding of the norms and values of individual cultures. Thus, feature film activities will enable learners to enlarge their knowledge of a particular culture and develop a fundamental insight into intercultural communication.
Keywords: climax, intercultural competence, interactive language, role-play
Procedia PDF Downloads 346
2700 Exploring Selected Nigerian Fictional Work and Films as Sources of Peace Building and Conflict Resolution in the Natural Resource Extraction Regions of Nigeria: A Social Conflict Theoretical Perspective and Analysis
Authors: Joyce Onoromhenre Agofure
Abstract:
Research has shown how fictional works and films reflect the destruction of the environment due to the exploitation of oil, gas, gold and forest products by multinational companies for profit, but such studies overlook discussions of conflict resolution and peacebuilding. This paper, however, examines the manner in which art forms project peace and conflict resolution, thereby contributing to mediation and stability geared towards changing appalling situations in the resource extraction regions of Nigeria. The paper draws on selected Nigerian films, Blood and Oil (2019), directed by Curtis Graham, and Black November (2012), directed by Jeta Amata, and a novel, Death of Eternity (2007), by Adamu Kyuka Usman. The study seeks to show that the disruptions caused in the natural resource regions of Nigeria have not only left adverse effects on the social well-being of the people but also require resolution through peacebuilding. By adopting the theoretical insights of Social Conflict, the paper focuses on artistic processes that enhance peacebuilding and conflict resolution in non-violent ways by using scenes, visual effects, themes and images that can educate by shaping opinions, influencing attitudes, and changing the ideas and behavioral patterns of individuals and communities. Taken together, the research will open up critical perceptions brought about by the artists under study to shed light on the dire need to sustain peace and actively participate in conflict resolution in natural resource extraction spaces.
Keywords: natural resource, extraction, conflict resolution, peace building
Procedia PDF Downloads 80
2699 Contrast Enhancement of Color Images with Color Morphing Approach
Authors: Javed Khan, Aamir Saeed Malik, Nidal Kamel, Sarat Chandra Dass, Azura Mohd Affandi
Abstract:
Low-contrast images can result from incorrect image acquisition settings or poor illumination conditions. Such images may not be visually appealing and can make feature extraction difficult. Contrast enhancement of color images can be useful in the medical field for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into the normalized RGB color space. The adaptive histogram equalization technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels of the original (low-contrast) image and of the image contrast-enhanced with adaptive histogram equalization (AHE) are morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results of the proposed technique are analyzed using cumulative variance and contrast improvement factor measures, and are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques.
Keywords: contrast enhancement, normalized RGB, adaptive histogram equalization, cumulative variance
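The core morphing idea — equalize each channel, then blend the equalized and original channels in proportion — can be sketched as below. This is a simplified illustration: plain global histogram equalization stands in for AHE, the normalized-RGB transform is omitted, and the blend weight `alpha` is an assumed parameter, not the paper's proportion.

```python
import numpy as np

def equalize(channel):
    """Plain histogram equalization (a stand-in for the adaptive version)."""
    hist, bins = np.histogram(channel.ravel(), bins=256, range=(0, 255))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize CDF to [0, 1]
    return np.interp(channel.ravel(), bins[:-1], cdf * 255).reshape(channel.shape)

def enhance(rgb, alpha=0.6):
    """Morph each equalized channel with its original: alpha*eq + (1-alpha)*orig."""
    out = np.empty_like(rgb, dtype=float)
    for c in range(3):
        out[..., c] = alpha * equalize(rgb[..., c]) + (1 - alpha) * rgb[..., c]
    return out.clip(0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
low = rng.integers(90, 130, (32, 32, 3)).astype(np.uint8)   # synthetic low-contrast image
enh = enhance(low)
print(low.std(), enh.std())   # contrast (std) should increase
```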
Procedia PDF Downloads 378
2698 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization
Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu
Abstract:
This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative estimation errors and optimizing mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of them. This provides effective technical support for applications such as indoor navigation and robot control.
Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection
Procedia PDF Downloads 61
2697 Enhancement of Light Extraction of Luminescent Coating by Nanostructuring
Authors: Aubry Martin, Nehed Amara, Jeff Nyalosaso, Audrey Potdevin, François Réveret, Michel Langlet, Genevieve Chadeyron
Abstract:
Energy-saving lighting devices based on Light-Emitting Diodes (LEDs) combine a semiconductor chip emitting in the ultraviolet or blue wavelength region with one or more phosphors deposited in the form of coatings. The most common devices combine a blue LED with the yellow phosphor Y₃Al₅O₁₂:Ce³⁺ (YAG:Ce) and a red phosphor. Even though these devices are characterized by satisfying photometric parameters (Color Rendering Index, Color Temperature) and good luminous efficiencies, further improvements can be carried out to enhance light extraction efficiency (an increase in phosphor forward emission). One possible strategy is to pattern the phosphor coatings. Here, we have worked on different ways to nanostructure the coating surface. On the one hand, we used colloidal lithography combined with the Langmuir-Blodgett technique to directly pattern the surface of YAG:Tb³⁺ sol-gel derived coatings, YAG:Tb³⁺ being used as a model phosphor. On the other hand, we achieved composite architectures combining YAG:Ce coatings and ZnO nanowires. The structural, morphological and optical properties of both systems have been studied and compared to flat YAG coatings. In both cases, nanostructuring brought a significant enhancement of the photoluminescence properties under UV or blue radiation. In particular, angle-resolved photoluminescence measurements have shown that nanostructuring modifies the photon path within the coatings, with better extraction of the guided modes. These two strategies have the advantage of being versatile and applicable to any phosphor synthesizable by the sol-gel technique. They thus appear to be promising ways to enhance the luminescence efficiencies of both phosphor coatings and the optical devices into which they are incorporated, such as LED-based lighting or safety devices.
Keywords: phosphor coatings, nanostructuring, light extraction, ZnO nanowires, colloidal lithography, LED devices
Procedia PDF Downloads 176
2696 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization
Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur
Abstract:
In-situ remediation is a technique that can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (Light Non-Aqueous Phase Liquid) contaminated aquifers. Benzene, toluene, ethylbenzene and xylene are the main components of the LNAPL contaminant; collectively, these contaminants are known as BTEX. In the in-situ bioremediation process, a set of injection and extraction wells is installed. Injection wells supply oxygen and other nutrients, which convert BTEX into carbon dioxide and water with the help of indigenous soil bacteria. On the other hand, extraction wells check the movement of the plume downstream. In this study, the optimal design of the system has been carried out using the PSO (Particle Swarm Optimization) algorithm. A comprehensive management strategy for the pumping of the injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises the determination of pumping rates, the total pumping volume and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for injection wells during the initial management period, since it facilitates the availability of oxygen and other nutrients necessary for biodegradation; however, it is low during the third year on account of sufficient oxygen availability. This is because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.
Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization
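A generic global-best PSO of the kind named in the abstract can be sketched as below. The cost function here is a hypothetical pumping-cost-plus-penalty surrogate for illustration only, not the paper's bioremediation simulation model; the swarm parameters are standard textbook values.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO; each position is a candidate pumping-rate vector."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)                              # keep rates feasible
        f = np.apply_along_axis(cost, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical objective: minimize total pumping while a penalty enforces
# that the combined rate of 4 wells meets an assumed treatment target
target = 12.0
cost = lambda q: q.sum() + 100.0 * max(0.0, target - q.sum()) ** 2
best, fbest = pso(cost, (np.zeros(4), np.full(4, 5.0)))
print(best.sum(), fbest)
```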
Procedia PDF Downloads 445
2695 Implementing a Database from a Requirement Specification
Abstract:
Creating a database schema is essentially a manual process. From a requirement specification, the information contained within has to be analyzed and reduced into a set of tables, attributes and relationships. This is a time-consuming process that has to go through several stages before an acceptable database schema is achieved. The purpose of this paper is to implement a Natural Language Processing (NLP) based tool to produce a database schema from a requirement specification. Stanford CoreNLP version 3.3.1 and the Java programming language were used to implement the proposed model. The outcome of this study indicates that the first draft of a relational database schema can be extracted from a requirement specification by using NLP tools and techniques with minimum user intervention. Therefore, this method is a step forward in finding a solution that requires little or no user intervention.
Keywords: information extraction, natural language processing, relation extraction
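The idea of deriving a first-draft schema from natural-language requirements can be illustrated with a toy heuristic. This regex-based sketch merely stands in for the paper's Stanford CoreNLP pipeline; the "A <entity> has <attributes>" sentence pattern and the TEXT column type are assumptions for illustration.

```python
import re

def schema_from_spec(spec):
    """Toy relation extraction: sentences of the form
    'A <entity> has <attr>, <attr> and <attr>.' each become one table."""
    tables = {}
    for ent, attrs in re.findall(r'[Aa]n? (\w+) has ([\w ,]+?)\.', spec):
        cols = [re.sub(r'^(?:an?|the)\s+', '', a.strip())      # drop articles
                for a in re.split(r',|\band\b', attrs) if a.strip()]
        tables[ent.lower()] = cols
    return ['CREATE TABLE {} ({});'.format(t, ', '.join(c + ' TEXT' for c in cols))
            for t, cols in tables.items()]

spec = ('A customer has a name, an address and a phone. '
        'An invoice has a date and a total.')
for ddl in schema_from_spec(spec):
    print(ddl)
```

A real pipeline would use dependency parses and coreference from CoreNLP to find entities and relationships rather than a fixed sentence template.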
Procedia PDF Downloads 261
2694 The Extraction and Stripping of Hg(II) from Produced Water via Hollow Fiber Contactor
Authors: Dolapop Sribudda, Ura Pancharoen
Abstract:
The separation of Hg(II) from produced water by hollow fiber contactors (HFC) was investigated. The system consisted of two hollow fiber modules connected in series: the first module was used for the extraction reaction and the second for the stripping reaction. The Aliquat 336 extractant was fed from the organic reservoir into the shell side of the first hollow fiber module and on to the shell side of the second module; the organic liquid was continuously recirculated back to the reservoir. The feed solution was pumped into the lumen (tube side) of the first hollow fiber module. Simultaneously, the stripping solution was pumped in the same way into the tube side of the second module. The feed and stripping solutions flowed counter-currently. Samples were collected at the outlets of the feed and stripping solutions for 1 h, and the Hg(II) concentration was characterized by Inductively Coupled Plasma Atomic Emission Spectroscopy (ICP-AES). The feed solution was produced water from the Gulf of Thailand. The extractant was Aliquat 336 dissolved in a kerosene diluent, and the stripping solutions used were nitric acid (HNO₃) and thiourea (NH₂CSNH₂). The effects of carrier concentration and type of stripping solution were investigated. Results showed that the best conditions were 10% (v/v) Aliquat 336 and 1.0 M NH₂CSNH₂. Under these optimal conditions, the extraction and stripping of Hg(II) were 98% and 44.2%, respectively.
Keywords: Hg(II), hollow fiber contactor, produced water, wastewater treatment
Procedia PDF Downloads 403
2693 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor
Authors: Pranav Gulati, Isha Sharma
Abstract:
Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular disease and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; however, owing to the unavailability of such devices for practical daily use, it is currently difficult to screen and subsequently regulate blood pressure. The complexities that hamper steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. The system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system provides an edge over existing wearables in that it allows uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by discarding unreliable data sets. We tested the system on 12 subjects, of which 6 served as the training dataset. For this, we measured blood pressure using a cuff-based BP monitor (Omron HEM-8712) while simultaneously recording the PPG signal from our cardio-analysis tool. The complete test was conducted by using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head.
This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring
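The regression step — mapping features extracted from PPG wavelets to cuff-measured blood pressure, training on one subset and validating on another — can be sketched with synthetic data. The three features (rise time, pulse width, peak-to-peak interval) and the linear model are illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 60
# Hypothetical per-beat features from a PPG wavelet pair:
# rise time (s), pulse width at half height (s), peak-to-peak interval (s)
feats = np.column_stack([rng.uniform(0.1, 0.3, n),
                         rng.uniform(0.2, 0.5, n),
                         rng.uniform(0.6, 1.1, n)])
# Synthetic "cuff" systolic readings tied to the features plus measurement noise
sbp = 90 + 120 * feats[:, 0] - 30 * feats[:, 1] + 20 * feats[:, 2] + rng.normal(0, 2, n)

# Train on the first 40 beats, validate on the remaining 20 (mimicking the
# train/validation split across subjects described in the abstract)
train, valid = slice(0, 40), slice(40, None)
model = LinearRegression().fit(feats[train], sbp[train])
print(round(model.score(feats[valid], sbp[valid]), 2))   # held-out R^2
```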
Procedia PDF Downloads 279
2692 Ontology Mapping with R-GNN for IT Infrastructure: Enhancing Ontology Construction and Knowledge Graph Expansion
Authors: Andrey Khalov
Abstract:
The rapid growth of unstructured data necessitates advanced methods for transforming raw information into structured knowledge, particularly in domain-specific contexts such as IT service management and outsourcing. This paper presents a methodology for automatically constructing domain ontologies using the DOLCE framework as the base ontology. The research focuses on expanding ITIL-based ontologies by integrating concepts from ITSMO, followed by the extraction of entities and relationships from domain-specific texts through transformers and statistical methods like formal concept analysis (FCA). In particular, this work introduces an R-GNN-based approach for ontology mapping, enabling more efficient entity extraction and ontology alignment with existing knowledge bases. Additionally, the research explores transfer learning techniques using pre-trained transformer models (e.g., DeBERTa-v3-large) fine-tuned on synthetic datasets generated via large language models such as LLaMA. The resulting ontology, termed IT Ontology (ITO), is evaluated against existing methodologies, highlighting significant improvements in precision and recall. This study advances the field of ontology engineering by automating the extraction, expansion, and refinement of ontologies tailored to the IT domain, thus bridging the gap between unstructured data and actionable knowledge.
Keywords: ontology mapping, knowledge graphs, R-GNN, ITIL, NER
Procedia PDF Downloads 16
2691 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features
Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan
Abstract:
Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. Support vector machines (SVM), artificial neural networks (ANN) and Cartesian genetic programming evolved artificial neural networks (CGPANN) are explored in this study, without the application of any segmentation algorithm. The signals are first pre-processed to remove any unwanted frequencies. Both time- and frequency-domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios, and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.
Keywords: pattern recognition, machine learning, computer aided diagnosis, heart sound classification, feature extraction
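The time- and frequency-domain feature extraction followed by SVM classification can be sketched on synthetic PCG-like signals. The particular features (mean, standard deviation, peak amplitude, spectral centroid) and the two signal classes are assumptions for illustration, not the paper's feature set or data.

```python
import numpy as np
from sklearn.svm import SVC

def features(sig, fs=1000):
    """Simple time- and frequency-domain descriptors of one PCG segment."""
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    centroid = (freqs * spec).sum() / spec.sum()        # spectral centroid (Hz)
    return [sig.mean(), sig.std(), np.abs(sig).max(), centroid]

rng = np.random.default_rng(0)
t = np.arange(1000) / 1000
# Two synthetic classes: a low-frequency tone vs. a higher-frequency tone, plus noise
normal = [np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(1000) for _ in range(20)]
murmur = [np.sin(2 * np.pi * 150 * t) + 0.1 * rng.standard_normal(1000) for _ in range(20)]

X = np.array([features(s) for s in normal + murmur])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel='rbf').fit(X, y)
print(clf.score(X, y))
```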
Procedia PDF Downloads 263
2690 Algae for Wastewater Treatment and CO₂ Sequestration along with Recovery of Bio-Oil and Value Added Products
Authors: P. Kiran Kumar, S. Vijaya Krishna, Kavita Verma, V. Himabindu
Abstract:
Concern about global warming and energy security has led to increased utilization of biomass as an alternative feedstock to fossil fuels. Biomass is a promising feedstock since it is abundant and cheap and can be transformed into fuels and chemical products. Microalgal biofuels are likely to have a much lower impact on the environment. Microalgae cultivation using sewage with industrial flue gases is a promising concept for integrated biodiesel production, CO₂ sequestration and nutrient recovery. Autotrophic, mixotrophic and heterotrophic are the three modes of cultivation for microalgal biomass. Several mechanical and chemical processes are available for the extraction of lipids/oily components from microalgal biomass. Organic solvent extraction methods require prior drying of the biomass and recovery of the solvent, both of which are energy-intensive. The hydrothermal process overcomes these drawbacks of conventional solvent extraction: the biomass is converted into oily components by processing in a hot, pressurized water environment. In this process, in addition to the lipid fraction of the microalgae, other value-added products such as proteins, carbohydrates and nutrients can also be recovered. In the present study, Scenedesmus quadricauda was isolated and cultivated autotrophically, heterotrophically and mixotrophically using sewage wastewater and industrial flue gas in batch and continuous modes. The harvested S. quadricauda biomass was used for the recovery of lipids and bio-oil. The lipids were extracted from the algal biomass using sonication as a cell disruption method followed by solvent (hexane) extraction; the lipid yield obtained was 8.3 wt%, with palmitic acid, oleic acid and octadecanoic acid as the major fatty acids. The hydrothermal process was also carried out for the extraction of bio-oil, and the yield obtained was 18 wt%.
Bio-oil compounds such as nitrogenous compounds, organic acids and esters, phenolics, hydrocarbons and alkanes were obtained by the hydrothermal processing of the algal biomass. Nutrients such as NO₃⁻ (68%) and PO₄⁻ (15%) were also recovered along with the bio-oil in the hydrothermal process.
Keywords: flue gas, hydrothermal process, microalgae, sewage wastewater, sonication
Procedia PDF Downloads 140
2689 Estimating Evapotranspiration Irrigated Maize in Brazil Using a Hybrid Modelling Approach and Satellite Image Inputs
Authors: Ivo Zution Goncalves, Christopher M. U. Neale, Hiran Medeiros, Everardo Mantovani, Natalia Souza
Abstract:
Multispectral and thermal infrared imagery from satellite sensors, coupled with climate and soil datasets, were used to estimate evapotranspiration and biomass in center pivots planted with maize in Brazil during the 2016 season. The hybrid remote-sensing-based model named Spatial EvapoTranspiration Modelling Interface (SETMI) was applied using multispectral and thermal infrared imagery from the Landsat Thematic Mapper instrument. Field data collected by the IRRIGER center pivot management company included daily weather information such as maximum and minimum temperature, precipitation and relative humidity for estimating reference evapotranspiration. In addition, soil water content data were obtained every 0.20 m in the soil profile down to 0.60 m depth throughout the season. Early-season soil samples were used to obtain water-holding capacity, wilting point, saturated hydraulic conductivity, initial volumetric soil water content, layer thickness and saturated volumetric water content. Crop canopy development parameters and irrigation application depths were also model inputs. The modeling approach is based on the reflectance-based crop coefficient approach contained within the SETMI hybrid ET model, using relationships developed in Nebraska. The model was applied to several fields located in Minas Gerais State in Brazil (approximate latitude -16.630434, longitude -47.192876). The model provides estimates of actual crop evapotranspiration (ET), crop irrigation requirements and all soil water balance outputs, including biomass estimation, using multi-temporal satellite image inputs. An interpolation scheme based on the growing degree-day concept was used to model the periods between satellite inputs, filling the gaps between image dates and obtaining daily data. Actual and accumulated ET, accumulated cold temperature and water stress, and crop water requirements estimated by the model were compared with data measured at the experimental fields.
Results indicate that the SETMI modeling approach using data assimilation produced reliable daily ET and crop water requirements for maize, interpolated between remote sensing observations, confirming the applicability of the SETMI model, with relationships originally developed in Nebraska, for estimating ET and water requirements in Brazil under tropical conditions.
Keywords: basal crop coefficient, irrigation, remote sensing, SETMI
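The growing-degree-day interpolation scheme mentioned above can be sketched as follows: a value observed only on two image dates is interpolated in accumulated-GDD time rather than calendar time, so growth between images tracks thermal time. The base temperature, daily temperatures and crop-coefficient values are all assumed numbers for illustration.

```python
import numpy as np

days = 30
rng = np.random.default_rng(0)
tmax = 28 + 4 * rng.random(days)   # synthetic daily max temperatures (deg C)
tmin = 16 + 4 * rng.random(days)   # synthetic daily min temperatures (deg C)

# Accumulated growing degree days (assumed base temperature 10 deg C)
gdd = np.cumsum(np.maximum((tmax + tmin) / 2 - 10.0, 0.0))

# Basal crop coefficient observed only on two satellite image dates
obs_days = [0, 29]
obs_kcb = [0.2, 1.0]

# Fill the gap between image dates in GDD time, not calendar time
kcb_daily = np.interp(gdd, gdd[obs_days], obs_kcb)
print(kcb_daily[0], kcb_daily[-1])
```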
Procedia PDF Downloads 140
2688 Task Evoked Pupillary Response for Surgical Task Difficulty Prediction via Multitask Learning
Authors: Beilei Xu, Wencheng Wu, Lei Lin, Rachel Melnyk, Ahmed Ghazi
Abstract:
In operating rooms, excessive cognitive stress can impede the performance of a surgeon, while low engagement can lead to avoidable mistakes due to complacency. As a consequence, there is a strong desire in the surgical community to be able to monitor and quantify the cognitive stress of a surgeon while performing surgical procedures. Quantitative cognitive-load-based feedback can also provide valuable insights during surgical training to optimize training efficiency and effectiveness. Various physiological measures have been evaluated for quantifying cognitive stress under different mental challenges. In this paper, we present a study using the cognitive stress measured by the task-evoked pupillary response, extracted from time-series eye-tracking measurements, to predict task difficulties in a virtual-reality-based robotic surgery training environment. In particular, we propose a differential task difficulty scale, utilize a comprehensive feature extraction approach, implement a multitask learning framework, and compare the regression accuracy across subjects between the conventional single-task approach and three multitask approaches.
Keywords: surgical metric, task evoked pupillary response, multitask learning, TSFresh
Procedia PDF Downloads 146
2687 Deciphering Orangutan Drawing Behavior Using Artificial Intelligence
Authors: Benjamin Beltzung, Marie Pelé, Julien P. Renoult, Cédric Sueur
Abstract:
To this day, it is not known whether drawing is a specifically human behavior or whether it finds its origins in ancestor species. An interesting window onto this question is the analysis of drawing behavior in species genetically close to humans, such as non-human primates. A good candidate for this approach is the orangutan, which shares 97% of our genes and exhibits multiple human-like behaviors. Focusing on figurative aspects may not be suitable for orangutans’ drawings, which may appear as scribbles yet still carry meaning. A manual feature selection would introduce an anthropocentric bias, as the features selected by humans may not match those relevant to orangutans. In the present study, we used deep learning to analyze the drawings of a female orangutan named Molly († 2011), who produced 1,299 drawings in her last five years as part of a behavioral enrichment program at the Tama Zoo in Japan. We investigate multiple ways to decipher Molly’s drawings. First, we demonstrate the existence of differences between seasons by training a deep learning model to classify Molly’s drawings according to season. Then, to understand and interpret these seasonal differences, we analyze how the information spreads within the network, from shallow to deep layers, where early layers encode simple local features and deep layers encode more complex and global information. More precisely, we investigate the impact of feature complexity on classification accuracy through feature extraction fed to a Support Vector Machine. Last, we leverage style transfer to dissociate features associated with drawing style from those describing the representational content, and analyze the relative importance of these two types of features in explaining seasonal variation. Content features were relevant for the classification, showing the presence of meaning in these non-figurative drawings and the ability of deep learning to decipher these differences.
The style of the drawings was also relevant, as style features encoded enough information to achieve better-than-chance classification. The accuracy of the style features was higher for deeper layers, highlighting the variation of style between seasons in Molly’s drawings. Through this study, we demonstrate how deep learning can help find meaning in non-figurative drawings and interpret these differences.
Keywords: cognition, deep learning, drawing behavior, interpretability
Procedia PDF Downloads 165
2686 A Fast and Robust Protocol for Reconstruction and Re-Enactment of Historical Sites
Authors: Sanaa I. Abu Alasal, Madleen M. Esbeih, Eman R. Fayyad, Rami S. Gharaibeh, Mostafa Z. Ali, Ahmed A. Freewan, Monther M. Jamhawi
Abstract:
This research proposes a novel reconstruction protocol for restoring missing surfaces and low-quality edges and shapes in photos of artifacts at historical sites. The protocol starts with the extraction of a point cloud. This extraction process is based on four subordinate algorithms, which differ in their robustness and in the amount of data they produce. Moreover, they apply different, but complementary, levels of accuracy to related features and to the way they build a quality mesh. The performance of our proposed protocol is compared with other state-of-the-art algorithms and toolkits. The statistical analysis shows that our algorithm significantly outperforms its rivals in the quality of the object files used to reconstruct the desired model.
Keywords: meshes, point clouds, surface reconstruction protocols, 3D reconstruction
Procedia PDF Downloads 457
2685 Hyperspectral Mapping Methods for Differentiating Mangrove Species along Karachi Coast
Authors: Sher Muhammad, Mirza Muhammad Waqar
Abstract:
It is necessary to monitor and identify mangrove types and their spatial extent near coastal areas because they play an important role in the coastal ecosystem and environmental protection. This research aims at identifying and mapping mangrove types along the Karachi coast, ranging from 24.79 to 24.85 degrees in latitude and 66.91 to 66.97 degrees in longitude, using hyperspectral remote sensing data and techniques. An image acquired in February 2012 by the Hyperion sensor has been used for this research. Image preprocessing includes geometric and radiometric correction, followed by Minimum Noise Fraction (MNF) and Pixel Purity Index (PPI). The output of MNF and PPI has been analyzed by visualizing it in n dimensions for end-member extraction. Well-distributed clusters on the n-dimensional scatter plot have been selected with the region of interest (ROI) tool as end members. These end members have been used as input for the classification techniques applied to identify and map mangrove species, including Spectral Angle Mapper (SAM), Spectral Feature Fitting (SFF), and Spectral Information Divergence (SID). Only two species of mangroves, namely Avicennia marina (white mangroves) and Avicennia germinans (black mangroves), have been observed throughout the study area.
Keywords: mangrove, hyperspectral, hyperion, SAM, SFF, SID
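The Spectral Angle Mapper used in this study compares each pixel spectrum to an end-member spectrum by the angle between them as vectors, so that overall brightness differences cancel out. A minimal sketch in Python, with hypothetical four-band reflectance values standing in for the Hyperion end members:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper (SAM): angle in radians between two spectra.
    Smaller angles mean more similar spectral shapes, independent of scale."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def classify_sam(pixel, endmembers):
    """Assign the pixel to the endmember with the smallest spectral angle."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in endmembers.items()}
    return min(angles, key=angles.get)

# Hypothetical 4-band reflectance spectra for the two mangrove species
endmembers = {
    "Avicennia marina":    np.array([0.10, 0.15, 0.40, 0.55]),
    "Avicennia germinans": np.array([0.08, 0.12, 0.25, 0.30]),
}
# A pixel twice as bright as the A. marina end member still maps to it,
# because SAM compares spectral shape, not magnitude
pixel = 2.0 * np.array([0.10, 0.15, 0.40, 0.55])
print(classify_sam(pixel, endmembers))  # Avicennia marina
```

The insensitivity to scale is the main reason SAM is popular for mapping vegetation under varying illumination.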
Procedia PDF Downloads 362
2684 Analytical Tools for Multi-Residue Analysis of Some Oxygenated Metabolites of PAHs (Hydroxylated, Quinones) in Sediments
Authors: I. Berger, N. Machour, F. Portet-Koltalo
Abstract:
Polycyclic aromatic hydrocarbons (PAHs) are toxic and carcinogenic pollutants produced mainly by incomplete combustion processes in industrialized and urbanized areas. After being emitted into the atmosphere, these persistent contaminants are deposited onto soils or sediments. Even if persistent, some can be partially degraded (photodegradation, biodegradation, chemical oxidation), leading to oxygenated metabolites (oxy-PAHs) which can be more toxic than their parent PAHs. Oxy-PAHs are measured less often than PAHs in sediments, and this study aims to compare different analytical tools for extracting and quantifying a mixture of four hydroxylated PAHs (OH-PAHs) and four carbonyl PAHs (quinones) in sediments. Methodologies: Two analytical systems – HPLC with on-line UV and fluorescence detectors (HPLC-UV-FLD) and GC coupled to a mass spectrometer (GC-MS) – were compared to separate and quantify the oxy-PAHs. Microwave assisted extraction (MAE) was optimized to extract the oxy-PAHs from sediments. Results: First, OH-PAHs and quinones were analyzed by HPLC with on-line UV and fluorimetric detectors. The OH-PAHs were detected with the sensitive FLD, while the non-fluorescent quinones were detected with UV. The limits of detection (LODs) obtained were in the range (2-3)×10⁻⁴ mg/L for OH-PAHs and (2-3)×10⁻³ mg/L for quinones. Second, although GC-MS is not well adapted to the analysis of the thermally degradable OH-PAHs and quinones without a derivatization step, it was used because of the advantages of the detector in terms of identification and of GC in terms of efficiency. Without derivatization, only two of the four quinones were detected in the range 1-10 mg/L (LODs = 0.3-1.2 mg/L), and the LODs for the four OH-PAHs were not very satisfactory either (0.18-0.6 mg/L). Two derivatization processes were therefore optimized against the literature: one for the silylation of OH-PAHs and one for the acetylation of quinones.
Silylation using BSTFA/TMCS 99/1 was enhanced by using a mixture of catalyst solvents (pyridine/ethyl acetate) and by finding the appropriate reaction duration (5-60 minutes). Acetylation was optimized at different steps of the process, including the initial volume of compounds to derivatize, the added amounts of Zn (0.1-0.25 g), the nature of the derivatization product (acetic anhydride, heptafluorobutyric acid…), and the liquid/liquid extraction at the end of the process. After derivatization, the LODs were decreased by a factor of 3 for OH-PAHs and by a factor of 4 for quinones, with all the quinones now detected. Thereafter, quinones and OH-PAHs were extracted from spiked sediments using microwave assisted extraction (MAE) followed by GC-MS analysis. Several solvent mixtures, of different volumes (10-25 mL) and at different extraction temperatures (80-120°C), were tested to obtain the best recovery yields. Satisfactory recoveries could be obtained for quinones (70-96%) and for OH-PAHs (70-104%). Temperature was a critical factor which had to be controlled to avoid oxy-PAH degradation during the MAE extraction process. Conclusion: Even if MAE-GC-MS was satisfactory for analyzing these oxy-PAHs, the MAE optimization has to be carried on to obtain a more appropriate extraction solvent mixture, allowing direct injection into the HPLC-UV-FLD system, which is more sensitive than GC-MS and does not require a long prior derivatization step.
Keywords: derivatizations for GC-MS, microwave assisted extraction, on-line HPLC-UV-FLD, oxygenated PAHs, polluted sediments
Procedia PDF Downloads 287
2683 Lead in The Soil-Plant System Following Aged Contamination from Ceramic Wastes
Authors: F. Pedron, M. Grifoni, G. Petruzzelli, M. Barbafieri, I. Rosellini, B. Pezzarossa
Abstract:
Lead contamination of agricultural land mainly vegetated with perennial ryegrass (Lolium perenne) has been investigated. The metal derived from the discharge of sludge from a ceramic industry that had used lead paints in the past. The results showed very high lead concentrations in many soil samples. In order to assess the lead contamination of the soil, a sequential extraction with H2O, KNO3, and EDTA was performed, and the chemical forms of lead in the soil were evaluated. More than 70% of the lead was in a potentially bioavailable form. Analysis of Lolium perenne showed elevated lead concentrations. A Freundlich-like model was used to describe the transferability of the metal from the soil to the plant.
Keywords: bioavailability, Freundlich-like equation, sequential extraction, soil lead contamination
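A Freundlich-like soil-to-plant transfer model typically takes the form C_plant = k · C_soil^n, which becomes linear after a log-log transformation. A brief sketch of fitting such a model, with hypothetical soil and ryegrass lead concentrations (the study's own data are not reproduced here):

```python
import numpy as np

# Hypothetical paired measurements: lead in soil vs. lead in Lolium perenne (mg/kg)
c_soil  = np.array([50.0, 120.0, 300.0, 800.0, 1500.0])
c_plant = np.array([4.1, 7.9, 14.6, 28.3, 42.0])

# Freundlich-like model: C_plant = k * C_soil**n, linearized as
# log(C_plant) = log(k) + n * log(C_soil), fitted by least squares
n, log_k = np.polyfit(np.log(c_soil), np.log(c_plant), 1)
k = np.exp(log_k)

predicted = k * c_soil**n
print(f"k = {k:.3f}, n = {n:.3f}")
```

An exponent n below 1 indicates that plant uptake rises less than proportionally as soil lead increases, a pattern commonly reported for soil-plant transfer.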
Procedia PDF Downloads 310
2682 Evaluation of Lemongrass (Cymbopogon citratus) as Mosquito Repellent Extracted by Supercritical Carbon Dioxide Assisted Process
Authors: Chia-Yu Lin, Chun-Ying Lee, Chih-Jer Lin
Abstract:
Lemongrass (Cymbopogon citratus), grown in tropical and subtropical regions around the world, has many potential uses in the pharmaceutical, cosmetics, food and flavor, and agriculture industries. In this study, because of its affinity with the human body and its friendliness to the environment, lemongrass extract was prepared by different processes to evaluate its effectiveness as a mosquito repellent. Moreover, the supercritical fluid extraction method has been widely used as an effective and environmentally friendly process in the preparation of a variety of compounds. Thus, lemongrass extracts obtained by the conventional hydrodistillation method and by the supercritical CO₂ assisted method were compared. The effects of pressure, temperature, and time duration on the supercritical CO₂ extraction were also investigated. The compositions of the different extracts were examined using a mass spectrometer. In the mosquito repellence experiment, the extract was placed inside a mosquito trap along with syrup. The mosquito counts in each trap, with extracts prepared by the different processes, were used for the quantitative evaluation. It was found that the extract from the supercritical CO₂ assisted process contained a higher citronellol content than that from the conventional hydrodistillation method. The extract with the higher citronellol content also proved more effective as a mosquito repellent.
Keywords: lemongrass (Cymbopogon citratus), hydrodistillation, supercritical fluid extraction, mosquito repellent
Procedia PDF Downloads 174
2681 The Use of Optical-Radar Remotely-Sensed Data for Characterizing Geomorphic, Structural and Hydrologic Features and Modeling Groundwater Prospective Zones in Arid Zones
Authors: Mohamed Abdelkareem
Abstract:
Remote sensing data contribute to predicting prospective areas of water resources. Integration of microwave and multispectral data along with climatic, hydrologic, and geological data has been used here. In this article, Sentinel-2, Landsat-8 Operational Land Imager (OLI), Shuttle Radar Topography Mission (SRTM), Tropical Rainfall Measuring Mission (TRMM), and Advanced Land Observing Satellite (ALOS) Phased Array Type L‐band Synthetic Aperture Radar (PALSAR) data were utilized to identify the geological, hydrologic and structural features of Wadi Asyuti, a defunct tributary of the Nile basin in the eastern Sahara. The image transformation of Sentinel-2 and Landsat-8 data allowed the different varieties of rock units to be characterized. Integration of microwave remotely-sensed data and GIS techniques provided information on the physical characteristics of catchments and rainfall zones that play a crucial role in mapping groundwater prospective zones. Fused Landsat-8 OLI and ALOS/PALSAR data improved the delineation of structural elements that are difficult to reveal using optical data alone. Lineament extraction and interpretation indicated that the area is clearly shaped by the NE-SW graben, which is cut by a NW-SE trend. Such structures allowed the accumulation of thick sediments in the downstream area. Processing of recent OLI data acquired on March 15, 2014, verified the flood potential maps and offered the opportunity to extract the extent of the flooding zone of the recent flash flood event (March 9, 2014), as well as revealing infiltration characteristics. Several layers, including geology, slope, topography, drainage density, lineament density, soil characteristics, rainfall, and morphometric characteristics, were combined after assigning a weight to each using a GIS-based knowledge-driven approach.
The results revealed that the predicted groundwater potential zones (GPZs) can be arranged into six distinctive groups, depending on their probability for groundwater, namely very low, low, moderate, high, very high, and excellent. Field and well data validated the delineated zones.
Keywords: GIS, remote sensing, groundwater, Egypt
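The knowledge-driven overlay described above amounts to a weighted sum of normalized thematic layers, with the resulting index sliced into the six classes. A toy sketch with assumed weights and tiny 2×2 raster layers (the study's actual weights and layers are not reproduced):

```python
import numpy as np

# Hypothetical normalized thematic layers (0 = unfavourable, 1 = favourable),
# e.g. rescaled geology, slope, drainage density, and rainfall grids
layers = {
    "geology":          np.array([[0.9, 0.4], [0.2, 0.7]]),
    "slope":            np.array([[0.8, 0.3], [0.1, 0.9]]),
    "drainage_density": np.array([[0.7, 0.5], [0.3, 0.6]]),
    "rainfall":         np.array([[0.6, 0.2], [0.4, 0.8]]),
}
# Knowledge-driven weights (assumed values; they must sum to 1)
weights = {"geology": 0.35, "slope": 0.25, "drainage_density": 0.2, "rainfall": 0.2}

# Weighted overlay: per-cell weighted sum of all layers
gpz_index = sum(weights[name] * grid for name, grid in layers.items())

# Slice the index into the six classes named in the abstract
classes = ["very low", "low", "moderate", "high", "very high", "excellent"]
bins = np.linspace(0.0, 1.0, len(classes) + 1)[1:-1]  # 5 internal thresholds
gpz_class = np.digitize(gpz_index, bins)
print(np.array(classes)[gpz_class])
```

In practice the weights come from expert judgment (or methods such as AHP) and the layers from rescaled GIS rasters, but the combination step is exactly this weighted sum.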
Procedia PDF Downloads 98
2680 Ultrasonic Extraction of Phenolics from Leaves of Shallots and Peels of Potatoes for Biofortification of Cheese
Authors: Lila Boulekbache-Makhlouf, Brahmi Fatiha
Abstract:
This study was carried out with the aim of enriching fresh cheese with food by-products, namely the leaves of shallots and the peels of potatoes. Firstly, the conditions for extracting the total polyphenols (TPP) using ultrasound were optimized. Then, the TPP and flavonoid contents and the antioxidant activity were evaluated for the extracts obtained under the optimal parameters. In addition, we carried out physico-chemical, microbiological, and sensory analyses of the cheese produced. The maximum TPP value of 70.44 mg GAE/g DM for shallot leaves was reached with 40% (v/v) ethanol, an extraction time of 90 min, and a temperature of 10°C. Meanwhile, the maximum TPP content of potato peels, 45.03 ± 4.16 mg GAE/g DM, was obtained using an ethanol/water mixture (40%, v/v), a time of 30 min, and a temperature of 60°C; the flavonoid contents were 13.99 and 7.52 mg QE/g DM, respectively. From the antioxidant tests, we deduced that the potato peels present a higher antioxidant power, with IC50s of 125.42 ± 2.78 μg/mL for DPPH, 87.21 ± 7.72 μg/mL for phosphomolybdate, and 200.77 ± 13.38 μg/mL for iron chelation, compared with the results obtained for shallot leaves, which were 204.29 ± 0.09, 45.85 ± 3.46, and 1004.10 ± 145.73 μg/mL, respectively. The results of the physico-chemical analyses showed that the formulated cheese complied with standards. The microbiological analyses showed that the hygienic quality of the cheese produced was satisfactory. According to the sensory analyses, the experts liked the cheese enriched with the powder and pieces of shallot leaves.
Keywords: shallots leaves, potato peels, ultrasound extraction, phenolic, cheese
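IC50 values such as those reported for the DPPH assay are commonly estimated by interpolating the concentration that gives 50% inhibition from a concentration-response series. A small sketch with hypothetical assay readings (not the study's data):

```python
import numpy as np

def ic50(concentrations, inhibition_pct):
    """Estimate IC50 (the concentration giving 50 % inhibition) by linear
    interpolation between measured points. Assumes inhibition increases
    monotonically with concentration."""
    return float(np.interp(50.0, inhibition_pct, concentrations))

# Hypothetical DPPH assay for a potato-peel extract (µg/mL vs % inhibition)
conc = np.array([25.0, 50.0, 100.0, 150.0, 200.0])
inhib = np.array([14.0, 28.0, 43.0, 58.0, 69.0])

print(f"IC50 ≈ {ic50(conc, inhib):.1f} µg/mL")
```

A lower IC50 means a stronger antioxidant, which is why the potato-peel DPPH value of about 125 µg/mL in the abstract indicates higher activity than the shallot-leaf value of about 204 µg/mL.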
Procedia PDF Downloads 184
2679 Extraction and Characterization of Kernel Oil of Acrocomia Totai
Authors: Gredson Keif Souza, Nehemias Curvelo Pereira
Abstract:
Kernel oil from macaúba is an important source of essential fatty acids. Thus, new knowledge of the oil of this species could lead to new applications, such as in pharmaceutical drugs, in the manufacture of cosmetics, and in various industrial processes. The aim of this study was to characterize the kernel oil of macaúba (Acrocomia totai) at different stages of maturation. The physico-chemical characteristics were determined in accordance with the official analytical methods for oils and fats. The water and lipid contents of the kernel, saponification value, acid value, water content of the oil, viscosity, density, fatty acid composition (by gas chromatography), and molar mass were determined. The results were submitted to Tukey's test at the 5% significance level. Higher values of unsaturated fatty acids were found for the unripe fruits.
Keywords: extraction, characterization, kernel oil, acrocomia totai
Procedia PDF Downloads 356
2678 Short Text Classification for Saudi Tweets
Authors: Asma A. Alsufyani, Maram A. Alharthi, Maha J. Althobaiti, Manal S. Alharthi, Huda Rizq
Abstract:
Twitter is one of the most popular microblogging sites; it allows users to publish short text messages called 'tweets'. Increasing the number of accounts a user follows (followings) increases the number of tweets displayed, from different topics and in an unclassified manner, in the user's timeline. Therefore, it can be a vital solution for many Twitter users to have the tweets in their timeline classified into general categories, to save the user's time and to provide easy and quick access to tweets by topic. In this paper, we developed a classifier for timeline tweets trained on a dataset of 3,600 tweets in total, which were collected from Saudi Twitter and annotated manually. We experimented with the well-known Bag-of-Words approach to text classification, and we used support vector machines (SVM) in the training process. The trained classifier performed well on a test dataset, with an average F1-measure of 92.3%. The classifier has been integrated into an application, which demonstrated in practice the classifier's ability to classify the user's timeline tweets.
Keywords: corpus creation, feature extraction, machine learning, short text classification, social media, support vector machine, Twitter
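The Bag-of-Words plus SVM pipeline described above can be sketched in a few lines, assuming scikit-learn is available; the toy tweets and category names below are English stand-ins for the study's Arabic dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for the manually annotated tweet dataset
tweets = [
    "the match tonight was amazing goal",
    "our team won the league cup",
    "new phone release with better camera",
    "software update improves battery life",
    "traffic jam on the highway this morning",
    "road closed due to an accident downtown",
]
labels = ["sports", "sports", "tech", "tech", "traffic", "traffic"]

# Bag-of-Words features (word counts) fed into a linear SVM
clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(tweets, labels)

print(clf.predict(["team scored a late goal"]))
```

In the real system the vectorizer's vocabulary comes from the 3,600-tweet training corpus, and performance is reported as an F1-measure on a held-out test set.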
Procedia PDF Downloads 155
2677 Application of a Synthetic DNA Reference Material for Optimisation of DNA Extraction and Purification for Molecular Identification of Medicinal Plants
Authors: Mina Kalantarzadeh, Claire Lockie-Williams, Caroline Howard
Abstract:
DNA barcoding is increasingly used for the identification of medicinal plants worldwide. In the last decade, a large number of DNA barcodes have been generated, and their application to species identification explored. The success of the DNA barcoding process relies on the accuracy of the results from the polymerase chain reaction (PCR) amplification step, which can be negatively affected by the presence of inhibitors or degraded DNA in herbal samples. An established DNA reference material can be used to support molecular characterisation protocols and prove system suitability, for fast and accurate identification of plant species. The present study describes the use of a novel reference material, the trnH-psbA British Pharmacopoeia Nucleic Acid Reference Material (trnH-psbA BPNARM), which was produced to aid in the identification of Ocimum tenuiflorum L., a widely used herb. During DNA barcoding of O. tenuiflorum, PCR amplification of the isolated DNA produced inconsistent results, suggesting an issue with either the method or the DNA quality of the tested samples. The trnH-psbA BPNARM was produced and tested to investigate the issues arising during PCR amplification. It was added to the plant material as a control DNA before extraction and was co-extracted and amplified by PCR. The PCR analyses revealed that amplification was not as successful as expected, suggesting that it was affected by the presence of inhibitors co-extracted from the plant materials. Various potential issues were assessed during DNA extraction, and optimisations were made accordingly. A DNA barcoding protocol for O. tenuiflorum, which included the reference sequence, was published in the British Pharmacopoeia 2016. An accelerated degradation test, which investigates the stability of the reference material over time, demonstrated that the trnH-psbA BPNARM remained stable when stored at 56 °C for a year.
Using this protocol and the trnH-psbA reference material provides a fast and accurate method for the identification of O. tenuiflorum. The optimisation of the DNA extraction using the trnH-psbA BPNARM provided a signposting method which can assist in overcoming common problems encountered when using molecular methods with medicinal plants.
Keywords: degradation, DNA extraction, nucleic acid reference material, trnH-psbA
Procedia PDF Downloads 199
2676 Electromagnetically-Vibrated Solid-Phase Microextraction for Organic Compounds
Authors: Soo Hyung Park, Seong Beom Kim, Wontae Lee, Jin Chul Joo, Jungmin Lee, Jongsoo Choi
Abstract:
A newly developed electromagnetically vibrated solid-phase microextraction (SPME) device for extracting nonpolar organic compounds from aqueous matrices was evaluated in terms of sorption equilibrium time, precision, and detection level, relative to three other more conventional extraction techniques involving SPME, viz., static, magnetic stirring, and fiber insertion/retraction. Electromagnetic vibration at 300-420 cycles/s was found to be the most efficient extraction technique in terms of reducing the sorption equilibrium time and enhancing both precision and linearity. The increased efficiency of electromagnetic vibration was attributed to a greater reduction in the thickness of the stagnant water layer, which facilitated more rapid mass transport from the aqueous matrix to the SPME fiber. Electromagnetic vibration below 500 cycles/s also did not detrimentally impact the sustainability of the extraction performance of the SPME fiber. Therefore, electromagnetically vibrated SPME may be a more powerful tool for rapid sampling and solvent-free sample preparation than other more conventional extraction techniques used with SPME.
Keywords: electromagnetic vibration, organic compounds, precision, solid-phase microextraction (SPME), sorption equilibrium time
Procedia PDF Downloads 254
2675 Information Extraction for Short-Answer Question for the University of the Cordilleras
Authors: Thelma Palaoag, Melanie Basa, Jezreel Mark Panilo
Abstract:
Checking short-answer questions and essays, whether on paper or in electronic form, is a tiring and tedious task for teachers. Evaluating a student's output requires knowledge across a wide array of domains, and scoring the work is often a critical task. Several attempts have been made in the past few years to create automated writing-assessment software, but they have received negative feedback from teachers and students alike, due to unreliable scoring, lack of feedback, and other issues. This study aims to create an application able to check short-answer questions by incorporating information extraction. Information extraction is a subfield of Natural Language Processing (NLP) in which a chunk of text (technically known as unstructured text) is broken down to gather the necessary bits of data and/or keywords (structured text) to be further analyzed or utilized by query tools. The proposed system shall be able to extract keywords or phrases from an individual's answers and match them against a corpus of words (as defined by the instructor), which shall be the basis for evaluating the individual's answer. The proposed system shall also enable the teacher to provide feedback and re-evaluate the student's output for writing elements that the computer cannot fully evaluate, such as creativity and logic. Teachers can formulate, design, and check short-answer questions efficiently by defining keywords or phrases as parameters and assigning weights for checking answers. With the proposed system, the teacher's time spent checking and evaluating students' output shall be lessened, making checking easier and the teacher more productive.
Keywords: information extraction, short-answer question, natural language processing, application
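The keyword-and-weight matching the authors describe can be sketched as a simple scoring function: the instructor defines weighted keywords or phrases, and the score is the fraction of total weight found in the answer. The function name, rubric, and sample answer below are all hypothetical:

```python
import re

def score_answer(answer, keywords):
    """Score a short answer against instructor-defined keywords/phrases.
    `keywords` maps each expected keyword or phrase to its weight; the
    score is the fraction of total weight matched in the answer."""
    text = answer.lower()
    total = sum(keywords.values())
    earned = sum(weight for kw, weight in keywords.items()
                 if re.search(r"\b" + re.escape(kw.lower()) + r"\b", text))
    return earned / total

# Hypothetical rubric for "What does photosynthesis produce?"
rubric = {"glucose": 2.0, "oxygen": 2.0, "light energy": 1.0}
answer = "Plants use light energy to make glucose, releasing oxygen."
print(score_answer(answer, rubric))  # 1.0
```

A production system would add stemming or synonym handling on top of this exact matching, and route low-confidence answers to the teacher for manual re-evaluation, as the abstract proposes.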
Procedia PDF Downloads 428