Search results for: end-user trained information extraction
12993 Learning the C-A-Bs: Resuscitation Training at Rwanda Military Hospital
Authors: Kathryn Norgang, Sarah Howrath, Auni Idi Muhire, Pacifique Umubyeyi
Abstract:
Description: A group of nurses addresses the shortage of trained staff to respond to critical patients at Rwanda Military Hospital (RMH) by developing a training program and a resuscitation response team. Members of the group who received the training when it first launched are now trainers of trainers; all components of the training program are organized and delivered by RMH staff; the clinical mentor only provides adjunct support. This two-day training is held quarterly at RMH; basic life support and exposure to interventions for advanced care are included in the test and skills sign-off. Seventy staff members have received the training this year alone. An increased number of admissions/transfers to the ICU due to successful resuscitation attempts has been noted. Lessons learned: -Number of staff trained 2012-2014 (to be verified). -Staff who train together practice with greater collaboration during actual resuscitation events. -Staff are more likely to initiate BLS if peer support is present; more staff trained equals more support. -More access to Advanced Cardiac Life Support training is necessary now that the cadre of BLS-trained staff is growing. Conclusions: Increased access to training, peer support, and collaborative practice are effective strategies for strengthening resuscitation capacity within a hospital. Keywords: resuscitation, basic life support, capacity building, resuscitation response teams, nurse trainer of trainers
Procedia PDF Downloads 304
12992 Using Computer Vision to Detect and Localize Fractures in Wrist X-ray Images
Authors: John Paul Q. Tomas, Mark Wilson L. de los Reyes, Kirsten Joyce P. Vasquez
Abstract:
The most frequent type of fracture is a wrist fracture, which is often difficult for medical professionals to detect and localize. In this study, fractures in wrist x-ray images were located and identified using deep learning and computer vision. The researchers used image filtering, masking, morphological operations, and data augmentation for the image preprocessing and trained the RetinaNet and Faster R-CNN models with ResNet50 backbones and Adam optimizers separately for each image filtering technique and projection. The RetinaNet model with the Anisotropic Diffusion Smoothing filter trained for 50 epochs obtained the greatest accuracy of 99.14%, precision of 100%, sensitivity/recall of 98.41%, specificity of 100%, and an IoU score of 56.44% for the posteroanterior projection utilizing augmented data. For the lateral projection using augmented data, the RetinaNet model with an Anisotropic Diffusion filter trained for 50 epochs produced the highest accuracy of 98.40%, precision of 98.36%, sensitivity/recall of 98.36%, specificity of 98.43%, and an IoU score of 58.69%. When comparing the test results of the different individual projections, models, and image filtering techniques, the Anisotropic Diffusion filter trained for 50 epochs produced the best classification and regression scores for both projections. Keywords: artificial intelligence, computer vision, wrist fracture, deep learning
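The IoU scores reported above measure how well a predicted fracture bounding box overlaps the ground-truth box. As a minimal illustrative sketch (not the authors' code), intersection-over-union for axis-aligned boxes in (x1, y1, x2, y2) form can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IoU of 1.0 means perfect overlap; a localization is usually counted as correct above a chosen threshold (0.5 is common).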
Procedia PDF Downloads 73
12991 Local Spectrum Feature Extraction for Face Recognition
Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd ZaizuIlyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using GMM to capture the underlying statistical information, to improve the performance of a face recognition system. Local spectrum features are extracted using an overlapping sub-block window that is mapped onto the face image. For each of these blocks, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved by discarding high-frequency coefficients, applying a rectangular mask on the spectrum of the facial image. Low-frequency information is non-Gaussian in the feature space, and by using a combination of several Gaussian functions with different statistical properties, the best feature representation can be modelled using a probability density function. The recognition process is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested using the FERET data sets and is able to achieve 92% recognition rates. Keywords: local features modelling, face recognition system, Gaussian mixture models, FERET
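The pipeline described, overlapping sub-blocks, a 2-D DFT per block, and a rectangular low-frequency mask, can be sketched roughly as follows; the block size, step, and mask size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def local_spectrum_features(img, block=8, step=4, keep=3):
    """Slide an overlapping window over the image, transform each block
    with the 2-D DFT, and keep a small rectangle of low-frequency
    magnitudes as that block's feature vector (high frequencies discarded)."""
    feats = []
    h, w = img.shape
    for y in range(0, h - block + 1, step):
        for x in range(0, w - block + 1, step):
            spec = np.fft.fft2(img[y:y + block, x:x + block])
            low = np.abs(spec[:keep, :keep])  # rectangular low-pass mask
            feats.append(low.ravel())
    return np.array(feats)
```

The resulting per-block vectors would then be pooled across training images and fitted with a Gaussian mixture model.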
Procedia PDF Downloads 667
12990 Quantitative Assessment of Road Infrastructure Health Using High-Resolution Remote Sensing Data
Authors: Wang Zhaoming, Shao Shegang, Chen Xiaorong, Qi Yanan, Tian Lei, Wang Jian
Abstract:
This study conducts a comparative analysis of the spectral curves of asphalt pavements at various aging stages to improve road information extraction from high-resolution remote sensing imagery. By examining the distinguishing capabilities and spectral characteristics, the research aims to establish a pavement information extraction methodology based on China's high-resolution satellite images. The process begins by analyzing the spectral features of asphalt pavements to construct a spectral assessment model suitable for evaluating pavement health. This model is then tested at a national highway traffic testing site in China, validating its effectiveness in distinguishing different pavement aging levels. The study's findings demonstrate that the proposed model can accurately assess road health, offering a valuable tool for road maintenance planning and infrastructure management. Keywords: spectral analysis, asphalt pavement aging, high-resolution remote sensing, pavement health assessment
Procedia PDF Downloads 21
12989 Distribution of Phospholipids, Cholesterol and Carotenoids in Two-Solvent System during Egg Yolk Oil Solvent Extraction
Authors: Aleksandrs Kovalcuks, Mara Duma
Abstract:
Egg yolk oil is a concentrated source of egg bioactive compounds, such as fat-soluble vitamins, phospholipids, cholesterol, carotenoids and others. To extract lipids and other fat-soluble nutrients from liquid egg yolk, a two-step extraction process involving polar (ethanol) and non-polar (hexane) solvents was used. This extraction technique was based on the polarities of the egg yolk bioactive compounds, where non-polar compounds were extracted into non-polar hexane, and polar compounds into the polar alcohol/water phase. But many egg yolk bioactive compounds are neither strongly polar nor non-polar. Egg yolk phospholipids, cholesterol and pigments are amphipathic (they have both polar and non-polar regions), and their behavior in the ethanol/hexane solvent system is not clear. The aim of this study was to clarify the behavior of phospholipids, cholesterol and carotenoids during extraction of egg yolk oil with ethanol and hexane and to determine the loss of these compounds in egg yolk oil. Egg yolks and egg yolk oil were analyzed for phospholipids (phosphatidylcholine (PC) and phosphatidylethanolamine (PE)), cholesterol and carotenoids (lutein, zeaxanthin, canthaxanthin and β-carotene) content using GC-FID and HPLC methods. PC and PE are polar lipids and were extracted into the polar ethanol phase. The concentration of PC in ethanol was 97.89% and of PE 99.81% of the total egg yolk phospholipids. Due to cholesterol's partial extraction into ethanol, the cholesterol content in egg yolk oil was reduced in comparison to its total content in egg yolk lipids. The highest amount of lutein and zeaxanthin was concentrated in the ethanol extract. The opposite situation was observed with canthaxanthin and β-carotene, which became the main pigments of egg yolk oil. Keywords: cholesterol, egg yolk oil, lutein, phospholipids, solvent extraction
Procedia PDF Downloads 509
12988 Neighborhood Graph-Optimized Preserving Discriminant Analysis for Image Feature Extraction
Authors: Xiaoheng Tan, Xianfang Li, Tan Guo, Yuchuan Liu, Zhijun Yang, Hongye Li, Kai Fu, Yufang Wu, Heling Gong
Abstract:
The image data collected in reality often have high dimensions and contain noise and redundant information. Therefore, it is necessary to extract a compact feature expression of the original perceived image. In this process, effective use of prior knowledge such as the data structure distribution and sample labels is the key to enhancing image feature discrimination and robustness. Based on the above considerations, this paper proposes a local preserving discriminant feature learning model based on graph optimization. The model has the following characteristics: (1) The locality preserving constraint can effectively excavate and preserve the local structural relationships between data. (2) The flexibility of graph learning can be improved by constructing a new local geometric structure graph using label information and a nearest neighbor threshold. (3) The L₂,₁ norm is used to redefine LDA, a diagonal matrix is introduced as the scale factor of LDA, and the samples are selected, which improves the robustness of feature learning. The validity and robustness of the proposed algorithm are verified by experiments on two public image datasets. Keywords: feature extraction, graph optimization local preserving projection, linear discriminant analysis, L₂,₁ norm
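The L₂,₁ norm used to redefine LDA in characteristic (3) is the sum of the Euclidean norms of a matrix's rows; penalizing rows as a whole induces row-wise sparsity and robustness to outlier samples. A minimal sketch of the norm itself:

```python
import numpy as np

def l21_norm(M):
    """L2,1 norm: the sum of the Euclidean norms of the rows of M.
    A row-wise penalty commonly used for robust feature selection."""
    return float(np.sqrt((M ** 2).sum(axis=1)).sum())
```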
Procedia PDF Downloads 149
12987 Simple Modified Method for DNA Isolation from Lyophilised Cassava Storage Roots (Manihot esculenta Crantz.)
Authors: P. K. Telengech, K. Monjero, J. Maling’a, A. Nyende, S. Gichuki
Abstract:
There is a need to identify an efficient protocol for the extraction of high-quality DNA for molecular work. Cassava roots are known for their high starch content, polyphenols and other secondary metabolites, which interfere with the quality of the DNA. These factors negatively interfere with the various methodologies for DNA extraction. There is a need to develop a simple, fast and inexpensive protocol that yields high-quality DNA. In this improved Dellaporta method, the storage roots are lyophilized to reduce the water content, and the extraction buffer is modified to eliminate the high polyphenols, starch and wax. This simple protocol was compared to other protocols intended for plants with similar secondary metabolites. The method gave a high yield (300-950 ng) of pure DNA for use in PCR analysis. This improved Dellaporta protocol allows isolation of pure DNA from starchy cassava storage roots. Keywords: cassava storage roots, Dellaporta, DNA extraction, lyophilisation, polyphenols, secondary metabolites
Procedia PDF Downloads 363
12986 Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology
Authors: Payman Davoodi-Nasab, Ahmad Rahbar-Kelishami, Jaber Safdari, Hossein Abolghasemi
Abstract:
High-purity rare earth elements (REEs) have been vastly used in the fields of chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys, catalysts, etc. Neodymium is one of the most abundant rare earths. With the development of the neodymium-iron-boron (Nd-Fe-B) permanent magnet, the importance of neodymium has dramatically increased. Solvent extraction processes have many operational limitations, such as a large inventory of extractants, loss of solvent due to organic solubility in aqueous solutions, volatilization of diluents, etc. One of the promising liquid membrane processes is the emulsion liquid membrane (ELM), which offers an alternative to solvent extraction processes. In this work, a study on Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM using response surface methodology (RSM) has been performed. The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as the carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as the surfactant, kerosene as the organic diluent and nitric acid as the internal phase. The effects of important operating variables, namely surfactant concentration, MWCNT concentration, and treatment ratio, were investigated. Results were optimized using a central composite design (CCD), and a regression model for extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three important variables and their interactions on the Nd extraction efficiency was determined. Results indicated that introducing MWCNTs to the ELM process led to increased Nd extraction due to higher membrane stability and mass transfer enhancement. A MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1 (%v/v) and a treatment ratio of 10 were found to be the optimum conditions. At the optimum conditions, the extraction of Nd(III) reached a maximum of 99.03%. Keywords: emulsion liquid membrane, extraction of neodymium, multi-walled carbon nanotubes, response surface method
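The CCD-based regression step fits a second-order polynomial response surface to the experimental runs. As a generic sketch for two coded factors (the study uses three variables; this two-factor version is illustrative only), the model can be fitted by ordinary least squares:

```python
import numpy as np

def fit_quadratic_response(X, y):
    """Fit the second-order RSM model
    y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    by ordinary least squares; X has one row per run, columns (x1, x2)."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs
```

With a standard CCD (factorial, axial, and center points) the design matrix has full rank, so all six coefficients are identifiable, and the fitted surface can then be searched for its optimum.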
Procedia PDF Downloads 255
12985 Simultaneous Extraction and Estimation of Steroidal Glycosides and Aglycone of Solanum
Authors: Karishma Chester, Sarvesh Paliwal, Sayeed Ahmad
Abstract:
Solanum nigrum L. (Family: Solanaceae) is an important Indian medicinal plant and has been used in various traditional formulations for hepatoprotection. It has been reported to contain significant amounts of steroidal glycosides such as solamargine and solasonine, as well as their aglycone solasodine. Being important pharmacologically active metabolites of several members of the Solanaceae, these markers have been attempted several times for extraction and quantification, but separately for the glycoside and aglycone parts because of their opposite polarity. Here, we propose for the first time simultaneous extraction and quantification of the aglycone (solasodine) and the glycosides (solamargine and solasonine) in leaves and berries of S. nigrum using solvent extraction followed by HPTLC analysis. Simultaneous extraction was carried out by sonication in a mixture of chloroform and methanol as solvent. The quantification was done using silica gel 60F254 HPTLC plates as the stationary phase and chloroform: methanol: acetone: 0.5% ammonia (7: 2.5: 1: 0.4 v/v/v/v) as the mobile phase at 400 nm, after derivatization with anisaldehyde-sulfuric acid reagent. The method was validated as per the ICH guideline for calibration, linearity, precision, recovery, robustness, specificity, LOD, and LOQ. The statistical data obtained for validation showed that the method can be used routinely for quality control of various solanaceous drugs reported for these markers, as well as traditional formulations containing those plants as an ingredient. Keywords: Solanum nigrum, solasodine, solamargine, solasonine, quantification
Procedia PDF Downloads 329
12984 The Mechanism Study of Degradative Solvent Extraction of Biomass by Liquid Membrane-Fourier Transform Infrared Spectroscopy
Authors: W. Ketren, J. Wannapeera, Z. Heishun, A. Ryuichi, K. Toshiteru, M. Kouichi, O. Hideaki
Abstract:
Degradative solvent extraction is a method developed for biomass upgrading by dewatering and fractionation of biomass under mild conditions. However, the conversion mechanism of the degradative solvent extraction method has not been fully understood so far. Rice straw was treated in 1-methylnaphthalene (1-MN) at different solvent-treatment temperatures varying from 250 to 350 °C with a residence time of 60 min. The liquid membrane-Fourier Transform Infrared Spectroscopy (FTIR) technique was applied to study the processing mechanism in depth without separation of the solvent. It has been found that the strength of the oxygen-hydrogen stretching (3600-3100 cm⁻¹) decreased slightly with increasing temperature in the range of 300-350 °C. The decrease of the hydroxyl group in the solvent-soluble fraction suggested a dehydration reaction taking place between 300 and 350 °C. FTIR spectra in the carbonyl stretching region (1800-1600 cm⁻¹) revealed the presence of ester groups, carboxylic acids and ketonic groups in the solvent-soluble fraction of the biomass. The carboxylic acid content increased in the range of 200 to 250 °C and then decreased. The prevalence of aromatic groups showed that aromatization took place during extraction above 250 °C. From 300 to 350 °C, the carbonyl functional groups in the solvent-soluble fraction noticeably decreased. The removal of the carboxylic acids and the decrease of esters into the form of carbon dioxide indicated that a decarboxylation reaction occurred during the extraction process. Keywords: biomass waste, degradative solvent extraction, mechanism, upgrading
Procedia PDF Downloads 285
12983 Synthetic Cannabinoids: Extraction, Identification and Purification
Authors: Niki K. Burns, James R. Pearson, Paul G. Stevenson, Xavier A. Conlan
Abstract:
In the Australian state of Victoria, synthetic cannabinoids have recently been made illegal under an amendment to the Drugs, Poisons and Controlled Substances Act 1981. Identification of synthetic cannabinoids in popular brands of 'incense' and 'potpourri' has been a difficult and challenging task due to the sample complexity and the changes observed in the chemical composition of the cannabinoids of interest. This study has developed analytical methodology for the targeted extraction and determination of synthetic cannabinoids available pre-ban. A simple solvent extraction and solid phase extraction methodology was developed that selectively extracted the cannabinoid of interest. High performance liquid chromatography coupled with UV-visible and chemiluminescence detection (acidic potassium permanganate and tris(2,2-bipyridine)ruthenium(III)) was used to interrogate the synthetic cannabinoid products. Mass spectrometry and nuclear magnetic resonance spectroscopy were used for structural elucidation of the synthetic cannabinoids. The tris(2,2-bipyridine)ruthenium(III) detection was found to offer better sensitivity than the permanganate-based reagents. In twelve different brands of herbal incense, cannabinoids were extracted and identified, including UR-144, XLR-11, AM2201, 5-F-AKB48 and A796-260. Keywords: electrospray mass spectrometry, high performance liquid chromatography, solid phase extraction, synthetic cannabinoids
Procedia PDF Downloads 468
12982 Volatile Profile of Monofloral Honeys Produced by Stingless Bees from the Brazilian Semiarid Region
Authors: Ana Caroliny Vieira da Costa, Marta Suely Madruga
Abstract:
In Brazil, there is a diverse fauna of social bees, known as Meliponinae or native stingless bees. These bees are important for providing a differentiated product, especially regarding unique sweetness, flavor, and aroma. However, information about the volatile fraction of honey produced by native stingless bees is still lacking. The aim of this work was to characterize the volatile compound profile of monofloral honeys produced by jandaíra bees (Melipona subnitida Ducke), which used chanana (Turnera ulmifolia L.), malícia (Mimosa quadrivalvis) and algaroba (Prosopis juliflora (Sw.) DC) as their floral sources, and by uruçu bees (Melipona scutellaris Latrelle), which used chanana (Turnera ulmifolia L.), malícia (Mimosa quadrivalvis) and angico (Anadenanthera colubrina) as their floral sources. The volatiles were extracted using the HS-SPME-GC-MS technique. The conditions for the extraction were: an equilibration time of 15 minutes, an extraction time of 45 min and an extraction temperature of 45°C. The results showed that the floral source had a strong influence on the aroma profile of the honeys under evaluation, since the chemical profiles were marked primarily by the classes of terpenes, norisoprenoids, and benzene derivatives. Furthermore, the results suggest the existence of differentiating compounds and potential markers for the botanical sources evaluated, such as linalool, D-sylvestrene, rose oxide and benzenethanol. These reports represent a valuable contribution to certifying the authenticity of these honeys and provide, for the first time, information intended for the construction of chemical knowledge of the aroma and flavor that characterize these honeys produced in Brazil. Keywords: aroma, honey, semiarid, stingless, volatiles
Procedia PDF Downloads 257
12981 Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery
Authors: Evans Belly, Imdad Rizvi, M. M. Kadam
Abstract:
Satellite imagery is one of the emerging technologies that is extensively utilized in various applications such as the detection/extraction of man-made structures, monitoring of sensitive areas, creating graphic maps, etc. The main approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, building and non-building regions (roads, vegetation, etc.) are investigated, with the main focus on building extraction. Once the whole landscape is collected, a trimming process is done so as to eliminate the landscape regions that arise from non-building objects. Finally, the label method is used to extract the building regions. The label method may be altered for more efficient building extraction. The images used for the analysis are those extracted from sensors having a resolution of less than 1 meter (VHR). This method provides an efficient way to produce good results. The additional overhead of mid-processing is eliminated without compromising the quality of the output, easing the processing steps required and the time consumed. Keywords: building detection, shadow detection, landscape generation, label, partitioning, very high resolution (VHR) satellite imagery
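The "label method" for extracting building regions is, in essence, connected-component labelling of the binary building mask: each isolated blob of candidate pixels receives its own integer label. A minimal stdlib-only sketch (4-connectivity assumed; real pipelines typically use a library routine):

```python
from collections import deque

def label_regions(mask):
    """4-connected component labelling of a binary mask (list of lists of 0/1).
    Returns (label image, number of components); each candidate region
    gets its own positive integer label, background stays 0."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not labels[sy][sx]:
                current += 1                      # start a new region
                queue = deque([(sy, sx)])
                labels[sy][sx] = current
                while queue:                      # breadth-first flood fill
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

Per-region statistics (area, shape) computed on the labelled image are what allow non-building blobs to be trimmed away.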
Procedia PDF Downloads 314
12980 Comparison of Microwave-Assisted and Conventional Leaching for Extraction of Copper from Chalcopyrite Concentrate
Authors: Ayfer Kilicarslan, Kubra Onol, Sercan Basit, Muhlis Nezihi Saridede
Abstract:
Chalcopyrite (CuFeS2) is the most common primary mineral used for the commercial production of copper. The low dissolution efficiency of chalcopyrite in sulfate media has prevented efficient industrial leaching of this mineral in sulfate media. Ferric ions, bacteria, oxygen and other oxidants have been used as oxidizing agents in the leaching of chalcopyrite in sulfate and chloride media under atmospheric or pressure leaching conditions. Two leaching methods were studied to evaluate chalcopyrite (CuFeS2) dissolution in acid media. First, the conventional oxidative acid leaching method was carried out using sulfuric acid (H2SO4) and potassium dichromate (K2Cr2O7) as the oxidant at atmospheric pressure. Second, microwave-assisted acid leaching was performed using the microwave accelerated reaction system (MARS) for the same reaction media. Parameters affecting the copper extraction, such as leaching time, leaching temperature, concentration of H2SO4 and concentration of K2Cr2O7, were investigated. The results of the conventional acid leaching experiments were compared to the microwave leaching method. It was found that the copper extraction obtained under high temperature and high concentrations of oxidant with microwave leaching is higher than that obtained conventionally. 81% copper extraction was obtained by the conventional oxidative acid leaching method in 180 min, with a concentration of 0.3 mol/L K2Cr2O7 in 0.5 M H2SO4 at 50 °C, while 93.5% copper extraction was obtained in 60 min with the microwave leaching method under the same conditions. Keywords: extraction, copper, microwave-assisted leaching, chalcopyrite, potassium dichromate
Procedia PDF Downloads 370
12979 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique
Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu
Abstract:
Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science. It has led to the emergence of new approaches for the prevention, diagnosis, and treatment of various diseases. In the diagnosis of lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. Identification and demarcation of masses for detecting cancer within lung tissue are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm have been presented through simulation. This has been performed employing CT images and multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features with useful information are selected after feature extraction. Eventually, the output image of lung cancer is obtained with 96.3% accuracy and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems for recognising objects in lung CT scan images. Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing
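Multilevel thresholding of the kind used here is often implemented with a multi-threshold Otsu criterion: choose the thresholds that maximize the between-class variance of the resulting intensity classes. An illustrative brute-force two-threshold version (an assumption about the flavour of multilevel thresholding, not the authors' exact algorithm):

```python
import numpy as np

def otsu_two_thresholds(hist):
    """Brute-force two-threshold Otsu on an intensity histogram:
    choose (t1, t2) maximising the between-class variance of the
    three classes [0..t1], (t1..t2], (t2..max]."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    w = np.cumsum(p)            # cumulative class probability
    m = np.cumsum(p * levels)   # cumulative class mean numerator
    mg = m[-1]                  # global mean
    best, best_t = -1.0, (0, 0)
    for t1 in range(len(hist) - 2):
        for t2 in range(t1 + 1, len(hist) - 1):
            w0, w1, w2 = w[t1], w[t2] - w[t1], 1.0 - w[t2]
            if min(w0, w1, w2) <= 0:
                continue  # skip splits that leave a class empty
            m0, m1, m2 = m[t1] / w0, (m[t2] - m[t1]) / w1, (mg - m[t2]) / w2
            var_b = w0*(m0-mg)**2 + w1*(m1-mg)**2 + w2*(m2-mg)**2
            if var_b > best:
                best, best_t = var_b, (t1, t2)
    return best_t
```

On a lung CT histogram, the two thresholds roughly separate air, soft tissue, and dense structures before the mask is passed on to feature extraction.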
Procedia PDF Downloads 101
12978 Extraction of the Volatile Oils of Dictyopteris Membranacea by Focused Microwave Assisted Hydrodistillation and Supercritical Carbon Dioxide: Chemical Composition and Kinetic Data
Authors: Mohamed El Hattab
Abstract:
Supercritical carbon dioxide extraction (SFE) and focused microwave-assisted hydrodistillation (FMAHD) were employed to isolate the volatile fraction of the brown alga Dictyopteris membranacea from the crude extract. The volatile fractions obtained were analyzed by GC/MS. The major compounds, in this case dictyopterene A, 6-butylcyclohepta-1,4-diene, undec-1-en-3-one, undeca-1,4-dien-3-one, (3-oxoundec-4-enyl) sulphur, tetradecanoic acid, hexadecanoic acid, 3-hexyl-4,5-dithia-cycloheptanone and albicanol (the latter present only in the FMAHD oil), were identified by comparing their mass spectra with those reported in the commercial MS database and also in our previous work. A kinetic study performed on both extraction processes, followed by external standard quantification, allowed the study of the mass percent evolution of the major compounds in the two oils; an empirical mathematical model was used to describe their extraction kinetics. Keywords: Dictyopteris membranacea, extraction techniques, mathematical modeling, volatile oils
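As an example of the kind of empirical model commonly fitted to extraction kinetics (the abstract does not specify its model, so first-order kinetics is an assumption here), the rate constant of y(t) = y_inf (1 - exp(-kt)) can be recovered by log-linearization and least squares through the origin:

```python
import numpy as np

def fit_first_order(t, y, y_inf):
    """Fit the rate constant k of the empirical first-order extraction model
    y(t) = y_inf * (1 - exp(-k t)) by linearising:
    ln(1 - y/y_inf) = -k t, then least squares through the origin."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    z = np.log(1.0 - y / y_inf)   # requires y < y_inf at all sampled times
    return -(t @ z) / (t @ t)
```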
Procedia PDF Downloads 428
12977 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The fundamental frequencies extracted are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and auto-associative neural networks (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 96% with the proposed method. Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
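Among the baseline detectors mentioned, PCA flags anomalies by reconstruction error: points far from the subspace learned on normal-condition data score high. A minimal numpy sketch (component count and data are illustrative, not the paper's configuration):

```python
import numpy as np

def pca_anomaly_scores(train, test, n_components=2):
    """Fit PCA on normal-condition data and score test points by their
    reconstruction error; large errors flag anomalies."""
    mu = train.mean(axis=0)
    # Principal directions from the SVD of the centred training data
    _, _, vt = np.linalg.svd(train - mu, full_matrices=False)
    basis = vt[:n_components]
    # Project onto the subspace, reconstruct, and measure the residual
    proj = (test - mu) @ basis.T @ basis + mu
    return np.linalg.norm(test - proj, axis=1)
```

A threshold on the score (e.g., a high percentile of the training-set scores) then separates standard from anomalous conditions.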
Procedia PDF Downloads 123
12976 Phylogenetic Differential Separation of Environmental Samples
Authors: Amber C. W. Vandepoele, Michael A. Marciano
Abstract:
Biological analyses frequently focus on single organisms; however, many times the biological sample consists of more than the target organism. For example, human microbiome research targets bacterial DNA, yet most samples consist largely of human DNA. Therefore, there would be an advantage to removing these contaminating organisms. Conversely, some analyses focus on a single organism but would greatly benefit from additional information regarding the other organismal components of the sample. Forensic analysis is one such example: in most forensic casework, human DNA is targeted; however, it typically exists in complex, non-pristine sample substrates such as soil or unclean surfaces. These complex samples are commonly comprised of not just human tissue but also microbial and plant life, where these organisms may help gain more forensically relevant information about a specific location or interaction. This project aims to optimize a 'phylogenetic' differential extraction method that will separate mammalian, bacterial and plant cells in a mixed sample. This is accomplished through the use of size exclusion separation, whereby the different cell types are separated through multiple filtrations using 5 μm filters. The components are then lysed via differential enzymatic sensitivities among the cells and extracted with minimal contribution from the preceding component. This extraction method will then allow complex DNA samples to be more easily interpreted through non-targeted sequencing, since the data will not be skewed toward the smaller and usually more numerous bacterial DNAs. This research project has demonstrated that this 'phylogenetic' differential extraction method successfully separated the epithelial and bacterial cells from each other with minimal cell loss. We will take this one step further, showing that when plant cells are added to the mixture, they will be separated and extracted from the sample. Research is ongoing, and results are pending. Keywords: DNA isolation, geolocation, non-human, phylogenetic separation
Procedia PDF Downloads 112
12975 Extraction of Phycocyanin from Spirulina platensis by Isoelectric Point Precipitation and Salting Out for Scale Up Processes
Authors: Velasco-Rendón María Del Carmen, Cuéllar-Bermúdez Sara Paulina, Parra-Saldívar Roberto
Abstract:
Phycocyanin is a blue pigment protein with fluorescent activity produced by cyanobacteria. It has recently been studied to determine its anticancer, antioxidant and anti-inflammatory potential. In 2014 it was approved as a Generally Recognized As Safe (GRAS) protein pigment for the food industry. Therefore, phycocyanin shows potential for the food, nutraceutical, pharmaceutical and diagnostics industries. Conventional phycocyanin extraction uses buffer solutions and ammonium sulphate followed by chromatography or aqueous two-phase systems (ATPS) for protein separation. These further purification steps are time-consuming, energy intensive and not suitable for scale-up processing. This work presents an alternative to conventional methods that also allows large-scale application with commercially available equipment. The extraction was performed by exposing the dry biomass to mechanical cavitation and salting out with NaCl, an edible reagent. Isoelectric point precipitation was also used, by addition of HCl and neutralization with NaOH. The results were measured and compared in terms of phycocyanin concentration, purity and extraction yield. Results showed that the best extraction condition was salting out with 0.20 M NaCl after 30 minutes of cavitation, giving a supernatant concentration of 2.22 mg/ml, a purity of 3.28 and a recovery from crude extract of 81.27%. Mechanical cavitation presumably increased the solvent-biomass contact, making the crude extract visibly dark blue after centrifugation. Compared to other systems, our process has fewer purification steps, similar concentrations in the phycocyanin-rich fraction and higher purity. The contaminants present in our process are edible NaCl or low pHs that can be neutralized. It can also be adapted to a semi-continuous process with commercially available equipment.
These characteristics make this process an appealing alternative for phycocyanin extraction as a pigment for the food industry. Keywords: extraction, phycocyanin, precipitation, scale-up
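The concentration and purity figures reported above are conventionally derived from UV-Vis absorbance readings. A minimal sketch, assuming the widely used Bennett-Bogorad equation for C-phycocyanin concentration and the A620/A280 ratio for purity (neither formula appears in the abstract, and the absorbance readings below are invented for illustration):

```python
def phycocyanin_concentration(a620, a652):
    """C-phycocyanin concentration in mg/mL (Bennett & Bogorad equation)."""
    return (a620 - 0.474 * a652) / 5.34

def phycocyanin_purity(a620, a280):
    """Purity ratio A620/A280; higher means less contaminating protein."""
    return a620 / a280

# Hypothetical absorbance readings, for illustration only
conc = phycocyanin_concentration(a620=1.50, a652=0.40)
purity = phycocyanin_purity(a620=1.50, a280=0.52)
```

By commonly cited thresholds, a purity ratio above about 0.7 is considered food grade, so the purity of 3.28 reported here is well within the food-industry target.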
Procedia PDF Downloads 438
12974 A Deep Learning Approach to Subsection Identification in Electronic Health Records
Authors: Nitin Shravan, Sudarsun Santhiappan, B. Sivaselvan
Abstract:
Subsection identification, in the context of Electronic Health Records (EHRs), is the task of identifying the sections that matter for downstream tasks such as auto-coding. In this work, we classify the text present in EHRs according to the information it carries, using machine learning and deep learning techniques. We first briefly describe the problem and formulate it as a text classification problem. We then discuss methods from the literature. We try two approaches: traditional machine learning methods based on feature extraction, and deep learning methods. Through experiments on a private dataset, we establish that the deep learning methods outperform the feature extraction based machine learning models. Keywords: deep learning, machine learning, semantic clinical classification, subsection identification, text classification
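The feature-extraction baseline contrasted with deep learning above can be illustrated with a bag-of-words classifier. A minimal pure-Python multinomial Naive Bayes sketch; the section labels and sentences are invented stand-ins, since the actual dataset is private and the abstract does not specify which classical model was used:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns per-label word counts and label counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in docs:
        word_counts[label].update(text.lower().split())
        label_counts[label] += 1
    return word_counts, label_counts

def classify_nb(text, word_counts, label_counts):
    """Pick the label maximizing log P(label) + sum log P(word|label)."""
    vocab = {w for c in word_counts.values() for w in c}
    total_docs = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        counts = word_counts[label]
        denom = sum(counts.values()) + len(vocab)  # Laplace smoothing
        score = math.log(label_counts[label] / total_docs)
        for w in text.lower().split():
            score += math.log((counts[w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

# Toy stand-ins for EHR subsections (hypothetical labels and text)
docs = [
    ("patient reports chest pain and shortness of breath", "history"),
    ("pain started two days ago after exertion", "history"),
    ("prescribed aspirin 81 mg daily", "medications"),
    ("continue metformin 500 mg twice daily", "medications"),
]
wc, lc = train_nb(docs)
pred = classify_nb("start aspirin daily", wc, lc)
```

A real feature-extraction pipeline would swap the raw counts for TF-IDF features and a stronger classifier, but the structure — vectorize, then score per class — is the same.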
Procedia PDF Downloads 217
12973 Word Spotting in Images of Handwritten Historical Documents
Authors: Issam Ben Jami
Abstract:
Information retrieval in digital libraries is very important because the most famous historical documents hold significant value. Word spotting in historical documents is a difficult problem because the writing in such documents is naturally cursive and exhibits wide variability in the scale and translation of words within the same document. We first present a system for automatic recognition based on the extraction of interest points from the word image. The key-point extraction phase represents the image as a synthetic description of shape in a multidimensional space. To this end, we use advanced methods that find and describe interest points invariant to scale, rotation and lighting, which are linked to local configurations of pixels. We test this approach on documents of the 15th century. Our experiments give important results. Keywords: feature matching, historical documents, pattern recognition, word spotting
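Once interest points are described as invariant feature vectors, word spotting reduces to descriptor matching between a query word and candidate words. A minimal sketch of nearest-neighbor matching with a ratio test to reject ambiguous matches; the 3-D descriptors here are tiny invented vectors, whereas real systems use high-dimensional descriptors (e.g. 128-D), which the abstract does not name:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(query, candidates, ratio=0.8):
    """For each query descriptor, return the index of its nearest candidate,
    kept only if it is clearly closer than the second nearest (ratio test)."""
    matches = []
    for q in query:
        order = sorted(range(len(candidates)),
                       key=lambda i: euclidean(q, candidates[i]))
        best, second = order[0], order[1]
        if euclidean(q, candidates[best]) < ratio * euclidean(q, candidates[second]):
            matches.append(best)
        else:
            matches.append(None)  # ambiguous: reject the match
    return matches

# Toy 3-D descriptors standing in for real keypoint descriptors
query = [(1.0, 0.0, 0.0), (0.0, 1.0, 1.0)]
candidates = [(0.9, 0.1, 0.0), (0.0, 0.9, 1.1), (5.0, 5.0, 5.0)]
result = match_descriptors(query, candidates)
```

The ratio test is what makes matching robust on cursive handwriting, where many local patches look alike: a match is accepted only when it is unambiguous.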
Procedia PDF Downloads 274
12972 Domain-Specific Ontology-Based Knowledge Extraction Using R-GNN and Large Language Models
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments. Keywords: ontology mapping, R-GNN, knowledge extraction, large language models, NER, knowledge graph
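Entities and relations extracted by such a pipeline are ultimately stored as subject-predicate-object triples that downstream queries can traverse. A minimal in-memory sketch of such a triple store; the class and relation names are invented for illustration and are not drawn from ITO, ITSMO or DOLCE:

```python
from collections import defaultdict

class TripleStore:
    """A tiny in-memory knowledge graph of (subject, predicate, object) triples."""
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subj, pred, obj):
        self.triples.add((subj, pred, obj))
        self.by_subject[subj].add((pred, obj))

    def query(self, subj=None, pred=None, obj=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [(s, p, o) for (s, p, o) in self.triples
                if (subj is None or s == subj)
                and (pred is None or p == pred)
                and (obj is None or o == obj)]

kg = TripleStore()
kg.add("IncidentTicket", "subClassOf", "ITSMRecord")        # hypothetical classes
kg.add("IncidentTicket", "assignedTo", "ServiceDeskAgent")
kg.add("ServiceDeskAgent", "partOf", "SupportTeam")
hits = kg.query(subj="IncidentTicket")
```

A production ontology would add typing, inference and persistence (e.g. an RDF store), but the wildcard-pattern query is the core operation that decision-support tooling runs against the extracted graph.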
Procedia PDF Downloads 16
12971 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19. This underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, followed by a deep neural network that finalizes the feature extraction and predicts the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images. An important feature of the training images is that they were cropped beforehand to eliminate distractions during training. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models are evaluated on an external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used. Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
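The segmentation IoU reported above compares a predicted lung mask against the ground-truth mask pixel by pixel. A minimal sketch of the computation on toy binary masks (the grids below are tiny invented examples, not data from the COVID-19 Radiography Database):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks (nested lists of 0/1)."""
    inter = sum(p & t for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    union = sum(p | t for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    return inter / union if union else 1.0

pred  = [[1, 1, 0],
         [1, 0, 0],
         [0, 0, 0]]
truth = [[1, 1, 0],
         [1, 1, 0],
         [0, 0, 0]]
score = iou(pred, truth)  # 3 overlapping pixels out of 4 in the union
```

IoU penalizes both missed lung pixels and spurious ones in a single number, which is why it is reported alongside pixel accuracy: a mask can score high accuracy simply because most of the image is background.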
Procedia PDF Downloads 130
12970 Gas Phase Extraction: An Environmentally Sustainable and Effective Method for the Extraction and Recovery of Metal from Ores
Authors: Kolela J Nyembwe, Darlington C. Ashiegbu, Herman J. Potgieter
Abstract:
Over the past few decades, the demand for metals has increased significantly. This has led to a decline of high-grade ore over time and an increase in mineral complexity and matrix heterogeneity. In addition, there are rising concerns about greener processes and a sustainable environment. Due to these challenges, the mining and metal industry has been forced to develop new technologies that can economically process and recover metallic values from low-grade ores, from materials with metal content locked up in industrially processed residues (tailings and slag), and from complex-matrix mineral deposits. Several methods have been developed to address these issues, among which are ionic liquids (ILs), heap leaching, and bioleaching. Recently, the gas phase extraction technique has been gaining interest because it eliminates many of the problems encountered in conventional mineral processing methods. The technique relies on the formation of volatile metal complexes, which can be removed from the residual solids by a carrier gas. The complexes can then be reduced using an appropriate method to obtain the metal and to regenerate and recover the organic extractant. Laboratory work on gas phase extraction has been conducted for the extraction and recovery of aluminium (Al), iron (Fe), copper (Cu), chromium (Cr), nickel (Ni), lead (Pb), and vanadium (V). In all cases, the extraction was found to depend on temperature and mineral surface area. The process technology appears very promising, offers the feasibility of recirculation and organic reagent regeneration, and has the potential to deliver on all promises of a “greener” process. Keywords: gas-phase extraction, hydrometallurgy, low-grade ore, sustainable environment
Procedia PDF Downloads 133
12969 Text Localization in Fixed-Layout Documents Using Convolutional Networks in a Coarse-to-Fine Manner
Authors: Beier Zhu, Rui Zhang, Qi Song
Abstract:
Text contained within fixed-layout documents such as ID cards, invoices, cheques, and passports can be of great semantic value and so requires high localization accuracy. Recently, algorithms based on deep convolutional networks have achieved high performance on text detection tasks. However, for text localization in fixed-layout documents, such algorithms detect word bounding boxes individually, which ignores the layout information. This paper presents a novel architecture built on convolutional neural networks (CNNs). A global text localization network and a regional bounding-box regression network are introduced to tackle the problem in a coarse-to-fine manner. The text localization network simultaneously locates word bounding points, taking the layout information into account. The bounding-box regression network takes features pooled from arbitrarily sized RoIs and refines the localizations. These two networks share their convolutional features and are trained jointly. A typical type of fixed-layout document, the ID card, is selected to evaluate the effectiveness of the proposed system. The networks are trained on data cropped from natural scene images and on synthetic data produced by a synthetic text generation engine. Experiments show that our approach locates word bounding boxes with high accuracy and achieves state-of-the-art performance. Keywords: bounding box regression, convolutional networks, fixed-layout documents, text localization
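The regional bounding-box regression stage typically predicts offsets relative to a coarse proposal rather than absolute coordinates. A minimal sketch of this parameterization; the specific target encoding is our assumption, following common R-CNN-style practice rather than details given in the abstract:

```python
import math

def encode_targets(proposal, truth):
    """Encode a ground-truth box relative to a proposal box.
    Boxes are (x_center, y_center, width, height)."""
    px, py, pw, ph = proposal
    gx, gy, gw, gh = truth
    return ((gx - px) / pw, (gy - py) / ph,
            math.log(gw / pw), math.log(gh / ph))

def decode_targets(proposal, t):
    """Apply predicted offsets to a proposal to refine the box."""
    px, py, pw, ph = proposal
    tx, ty, tw, th = t
    return (px + tx * pw, py + ty * ph,
            pw * math.exp(tw), ph * math.exp(th))

proposal = (50.0, 30.0, 20.0, 10.0)   # coarse localization from the global network
truth = (54.0, 32.0, 24.0, 12.0)      # refined ground-truth box
t = encode_targets(proposal, truth)
recovered = decode_targets(proposal, t)
```

Normalizing the translation by the proposal size and taking log-scale ratios makes the regression targets roughly scale-invariant, which is what lets one regression head refine RoIs of arbitrary size.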
Procedia PDF Downloads 194
12968 Separation of Copper(II) and Iron(III) by Solvent Extraction and Membrane Processes with Ionic Liquids as Carriers
Authors: Beata Pospiech
Abstract:
Separation of metal ions from aqueous solutions is an important as well as difficult process in hydrometallurgical technology, necessary for obtaining pure metals. Solvent extraction and membrane processes are well-known separation methods. Recently, ionic liquids (ILs) have very often been applied and studied as extractants and carriers of metal ions from aqueous solutions due to their good extractability for various metals. This work discusses a method to separate copper(II) and iron(III) from hydrochloric acid solutions by solvent extraction and by transport across polymer inclusion membranes (PIM) with selected ionic liquids as extractants/ion carriers. Cyphos IL 101 (trihexyl(tetradecyl)phosphonium chloride), Cyphos IL 104 (trihexyl(tetradecyl)phosphonium bis(2,4,4-trimethylpentyl)phosphinate), trioctylmethylammonium thiosalicylate [A336][TS] and trihexyl(tetradecyl)phosphonium thiosalicylate [PR4][TS] were used for the investigations. The effect of parameters such as the hydrochloric acid concentration in the aqueous phase on iron(III) and copper(II) extraction has been investigated. Cellulose triacetate membranes with the selected ionic liquids as carriers have been prepared and applied for transport of iron(III) and copper(II) from hydrochloric acid solutions. Keywords: copper, iron, ionic liquids, solvent extraction
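Extraction performance in such studies is usually reported via the distribution ratio D and the extraction percentage. A minimal sketch of these standard hydrometallurgical formulas; the concentrations and phase volumes below are invented for illustration and are not the paper's measurements:

```python
def distribution_ratio(c_aq_initial, c_aq_final):
    """D = [M]org / [M]aq at equilibrium, assuming equal phase volumes so
    that the organic-phase concentration is the aqueous-phase loss."""
    return (c_aq_initial - c_aq_final) / c_aq_final

def extraction_percent(d, v_aq=1.0, v_org=1.0):
    """%E = 100 * D / (D + V_aq / V_org)."""
    return 100.0 * d / (d + v_aq / v_org)

# Hypothetical equilibrium concentrations (mol/L), equal phase volumes
d = distribution_ratio(c_aq_initial=0.010, c_aq_final=0.002)
e = extraction_percent(d)
```

Plotting D (or %E) for each metal against HCl concentration, as the abstract describes, is what reveals the conditions under which copper(II) and iron(III) separate: a large gap between their distribution ratios at a given acidity means a clean split.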
Procedia PDF Downloads 279
12967 Study of the Effect of Extraction Solvent on the Content of Total Phenolic, Total Flavonoids and the Antioxidant Activity of an Endemic Medicinal Plant Growing in Morocco
Authors: Aghoutane Basma, Naama Amal, Talbi Hayat, El Manfalouti Hanae, Kartah Badreddine
Abstract:
Aromatic and medicinal plants are used by man for different needs, including food and medicinal needs, for their biological properties, attributed mainly to phenolic compounds, and for their antioxidant capacity. The aim of our study is to compare three extraction solvents by evaluating the phenolic compound contents, the flavonoid contents, and the antioxidant activities of extracts obtained by different methods from the aerial part of an endemic medicinal plant from Morocco. The antioxidant activity was also confirmed by three methods: 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, ferric reducing antioxidant power (FRAP), and total antioxidant capacity (CAT). The results showed that this plant is rich in polyphenols and flavonoids and has a very important antioxidant capacity regardless of the solvent or the extraction method. This suggests the importance of using extracts from this plant as a new natural source of food additives and potent antioxidants in the food industry. Keywords: endemic plant of Morocco, phenolic compound, solvent, extraction technique, antioxidant activity
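DPPH results are conventionally expressed as a percentage of radical scavenging computed from absorbance readings at 517 nm. A minimal sketch of this standard calculation; the abstract does not give the formula, and the absorbance values below are invented for illustration:

```python
def dpph_inhibition(abs_control, abs_sample):
    """Percent inhibition of the DPPH radical:
    %I = 100 * (A_control - A_sample) / A_control."""
    return 100.0 * (abs_control - abs_sample) / abs_control

# Hypothetical absorbance readings at 517 nm
pct = dpph_inhibition(abs_control=0.80, abs_sample=0.20)
```

Comparing %inhibition (or the derived IC50) across the three solvents is what allows the extracts to be ranked for antioxidant capacity.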
Procedia PDF Downloads 298
12966 Multiple Medical Landmark Detection on X-Ray Scan Using Reinforcement Learning
Authors: Vijaya Yuvaram Singh V M, Kameshwar Rao J V
Abstract:
The challenge in developing neural network based methods for the medical domain is the availability of data. Anatomical landmark detection in the medical domain is the process of finding points on the x-ray scan of a patient. Most of the time this task is done manually by trained professionals, as it requires precision and domain knowledge. Traditionally, object detection based methods are used for landmark detection. Here, we utilize reinforcement learning and a query-based method to train a single agent capable of detecting multiple landmarks. A deep Q-network agent is trained to detect single and multiple landmarks present on the hip and shoulder from x-ray scans of a patient. A single agent is trained to find multiple landmarks, making it superior to having individual agents per landmark. For the initial study, five images of different patients are used as the environment, and the agent's performance is tested on two unseen images. Keywords: reinforcement learning, medical landmark detection, multi target detection, deep neural network
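The Q-learning idea behind the deep Q-network can be illustrated without a neural network. A minimal tabular sketch in which an agent learns to step toward a landmark position on a 1-D strip, a toy stand-in for moving a crosshair across an x-ray; all states, rewards and hyperparameters are invented for illustration:

```python
import random

random.seed(0)
SIZE, LANDMARK = 10, 7
ACTIONS = (-1, +1)  # move left or right

q = [[0.0, 0.0] for _ in range(SIZE)]  # Q-table: q[state][action_index]

for _ in range(500):  # training episodes with epsilon-greedy exploration
    state = random.randrange(SIZE)
    for _ in range(20):
        if random.random() < 0.2:
            a = random.randrange(2)                         # explore
        else:
            a = max((0, 1), key=lambda i: q[state][i])      # exploit
        nxt = min(SIZE - 1, max(0, state + ACTIONS[a]))
        reward = 1.0 if nxt == LANDMARK else -0.1
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        q[state][a] += 0.5 * (reward + 0.9 * max(q[nxt]) - q[state][a])
        state = nxt
        if state == LANDMARK:
            break

# After training, the greedy policy walks from position 2 to the landmark
pos = 2
for _ in range(SIZE):
    pos = min(SIZE - 1, max(0, pos + ACTIONS[max((0, 1), key=lambda i: q[pos][i])]))
    if pos == LANDMARK:
        break
```

The deep Q-network in the paper replaces the table with a network over image patches and a richer action set (2-D moves, scale changes), but the update rule is the same.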
Procedia PDF Downloads 142
12965 Liquid-Liquid Extraction of Uranium (VI) from Aqueous Solution Using 1-Hydroxyalkylidene-1,1-Diphosphonic Acids
Authors: Mustapha Bouhoun Ali, Ahmed Yacine Badjah Hadj Ahmed, Mouloud Attou, Abdel Hamid Elias, Mohamed Amine Didi
Abstract:
The extraction of uranium(VI) from aqueous solutions has been investigated using 1-hydroxyhexadecylidene-1,1-diphosphonic acid (HHDPA) and 1-hydroxydodecylidene-1,1-diphosphonic acid (HDDPA), which were synthesized and characterized by elemental analysis and by FT-IR, 1H NMR and 31P NMR spectroscopy. In this paper, we propose a tentative assignment of the shifts of these two ligands and their specific complexes with uranium(VI). We carried out the extraction of uranium(VI) by HHDPA and HDDPA from [carbon tetrachloride + 2-octanol (v/v: 90%/10%)] solutions. Various factors such as contact time, pH, organic/aqueous phase ratio and extractant concentration were considered. The optimum conditions obtained were: contact time = 20 min, organic/aqueous phase ratio = 1, pH = 3.0 and extractant concentration = 0.3 M. The extraction yields are higher in the case of HHDPA, whose hydrocarbon chain is longer than that of HDDPA. Logarithmic plots of the uranium(VI) distribution ratio vs. pHeq and vs. the extractant concentration showed that the ratio of extractant to extracted uranium(VI) (ligand/metal) is 2:1. The formula of the complex of uranium(VI) with HHDPA and HDDPA is UO2(H3L)2 (HHDPA and HDDPA are denoted H4L). Spectroscopic analysis showed that coordination of uranium(VI) takes place via oxygen atoms. Keywords: liquid-liquid extraction, uranium(VI), 1-hydroxyalkylidene-1,1-diphosphonic acids, HHDPA, HDDPA, aqueous solution
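The 2:1 stoichiometry is inferred by slope analysis: if extraction follows UO2(2+) + 2 H4L = UO2(H3L)2 + 2 H(+), then a plot of log D against log [H4L] has slope 2. A minimal least-squares sketch on invented data points; the numbers are illustrative, not the paper's measurements:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical log10 extractant concentrations and log10 distribution ratios:
# a slope near 2 indicates two ligand molecules per extracted uranium(VI)
log_conc = [-1.0, -0.8, -0.6, -0.4]
log_d = [-0.5, -0.1, 0.3, 0.7]
s = slope(log_conc, log_d)
```

The same analysis applied to log D vs. pHeq gives the number of protons released per extracted metal ion, which is how both exponents in the proposed UO2(H3L)2 formula are fixed.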
Procedia PDF Downloads 528
12964 Technologies of Isolation and Separation of Anthraquinone Derivatives
Authors: Dmitry Yu. Korulkin, Raissa A. Muzychkina
Abstract:
In this review, generalized data about different methods of extraction, separation and purification of natural and modified anthraquinones are presented. The basic regularities of the isolation process are analyzed. The effects of temperature, pH, extractant polarity, catalysts and other factors on the isolation process are revealed. Procedia PDF Downloads 341