Search results for: waveform feature extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3221

3101 A Comparative Study on Automatic Feature Classification Methods of Remote Sensing Images

Authors: Lee Jeong Min, Lee Mi Hee, Eo Yang Dam

Abstract:

Geospatial feature extraction is an important issue in remote sensing research. Image classification has traditionally been based on statistical techniques, but in recent years data mining and machine learning techniques for automated image processing have been applied to remote sensing, with a focus on the possibility of generating improved results. In this study, artificial neural network and decision tree techniques are applied to classify high-resolution satellite images, the results are compared with those of maximum likelihood classification (MLC), a statistical technique, and the pros and cons of each technique are analysed.
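
As a rough, hypothetical illustration of such a comparison (not the authors' code or data), the sketch below trains an artificial neural network and a decision tree alongside a Gaussian maximum-likelihood classifier, approximated here by quadratic discriminant analysis, on placeholder per-pixel band features using scikit-learn.

# Illustrative sketch: comparing an ANN and a decision tree with a Gaussian
# maximum-likelihood classifier (approximated by QDA) on pixel features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Placeholder data: rows are pixels, columns are spectral bands.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic land-cover labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
classifiers = {
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "Decision tree": DecisionTreeClassifier(max_depth=8, random_state=0),
    "MLC (Gaussian, ~QDA)": QuadraticDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))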

Keywords: remote sensing, artificial neural network, decision tree, maximum likelihood classification

Procedia PDF Downloads 321
3100 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer

Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu

Abstract:

Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image directly in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the image geolocalization method. For example, excessive down-sampling can lead to blurred building contours, and inappropriate cropping can lead to the loss of key semantic elements, resulting in incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. Firstly, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel net) is used to approximate the best receptive field, thus keeping the geometric shapes of the original image, and SENet (squeeze-and-excitation net) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
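
A heavily simplified sketch of the resizing idea follows, written in PyTorch under explicit assumptions: bilinear interpolation is combined with a single squeeze-and-excitation-style channel-attention branch, while the SKNet receptive-field selection, the triplet attention and the descriptor network of the actual method are omitted. The class name and hyperparameters are illustrative only.

# Minimal sketch: a learnable resizer that bilinearly resizes the input and
# adds SE-style channel attention over convolutional features before fusing
# the enhancement back into the resized image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGeometricResizer(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.conv_in = nn.Conv2d(3, channels, 3, padding=1)
        self.se = nn.Sequential(                      # squeeze-and-excitation branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        self.conv_out = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x, out_size=(480, 640)):
        resized = F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
        feats = F.relu(self.conv_in(resized))
        feats = feats * self.se(feats)                # emphasise contour-bearing channels
        return resized + self.conv_out(feats)         # fuse enhancement with the resized image

img = torch.rand(1, 3, 375, 500)                      # arbitrary-sized input
out = SimpleGeometricResizer()(img)
print(out.shape)                                      # torch.Size([1, 3, 480, 640])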

Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature

Procedia PDF Downloads 178
3099 Green Delivery Systems for Fruit Polyphenols

Authors: Boris M. Popović, Tatjana Jurić, Bojana Blagojević, Denis Uka, Ružica Ždero Pavlović

Abstract:

Green solvents are environmentally friendly and greatly improve the sustainability of chemical processes. There is a growing interest in the green extraction of polyphenols from fruits. In this study, we consider three Natural Deep Eutectic Solvent (NADES) systems based on choline chloride as a hydrogen bond acceptor and malic acid, urea, and fructose as hydrogen bond donors. The NADES systems were prepared by heating-and-stirring, ultrasound, and microwave (MW) methods. Sour cherry pomace was used as a natural source of polyphenols. Polyphenol extraction from the cherry pomace was performed by ultrasound-assisted extraction and microwave-assisted extraction and compared with conventional heating-and-stirring extraction. It was found that MW-assisted preparation of NADES was the fastest, requiring less than 30 s. MW extraction of polyphenols was also the most rapid, with less than 5 min necessary for extract preparation. All three NADES systems were highly efficient for anthocyanin extraction, but the most efficient was the system with malic acid as the hydrogen bond donor (the anthocyanin yield was enhanced by 62.33% after MW extraction with NADES compared with the conventional solvent).

Keywords: anthocyanins, green extraction, NADES, polyphenols

Procedia PDF Downloads 65
3098 Effect of Ultrasound and Enzyme on the Extraction of Eurycoma longifolia (Tongkat Ali)

Authors: He Yuhai, Ahmad Ziad Bin Sulaiman

Abstract:

Tongkat Ali, or Eurycoma longifolia, is a traditional Malay and Orang Asli herb used as an aphrodisiac, general tonic, anti-malarial, and anti-pyretic. It has been recognized as a cash crop in Malaysia due to its high pharmaceutical value. In Tongkat Ali, eurycomanone, a quassinoid, is usually chosen as the marker phytochemical because it is the most abundant phytochemical. In this research, ultrasound and enzyme treatment were used to enhance the extraction of eurycomanone from Tongkat Ali. Ultrasound-assisted extraction (USE) enhances extraction by facilitating the swelling and hydration of the plant material, enlarging the plant pores, breaking the plant cells, reducing the plant particle size, and creating cavitation bubbles that enhance mass transfer in both the washing and diffusion phases of extraction. The enzyme hydrolyses the plant cell wall, loosening its structure and releasing more phytochemicals from the plant cells, thereby enhancing the productivity of the extraction. Possible effects of ultrasound on the activity of the enzyme during hydrolysis of the cell wall are also under investigation in this research. The extracts were analysed by high-performance liquid chromatography for their eurycomanone yields. Throughout this process, conventional water extraction was used as a control for comparing the performance of the ultrasound- and enzyme-assisted extraction.

Keywords: ultrasound, enzymatic, extraction, Eurycoma longifolia

Procedia PDF Downloads 392
3097 Feature Weighting Comparison Based on Clustering Centers in the Detection of Diabetic Retinopathy

Authors: Kemal Polat

Abstract:

In this paper, three feature weighting methods have been used to improve the classification performance for diabetic retinopathy (DR). To classify diabetic retinopathy, features extracted from the output of several retinal image processing algorithms, such as image-level, lesion-specific and anatomical components, have been used and fed into the classifier algorithms. The dataset used in this study has been taken from the University of California, Irvine (UCI) machine learning repository. Feature weighting methods including fuzzy c-means clustering based feature weighting, subtractive clustering based feature weighting, and Gaussian mixture clustering based feature weighting have been used and compared with each other in the classification of DR. After feature weighting, five different classifier algorithms comprising multi-layer perceptron (MLP), k-nearest neighbor (k-NN), decision tree, support vector machine (SVM), and Naïve Bayes have been used. The hybrid method based on the combination of subtractive clustering based feature weighting and the decision tree classifier obtained a classification accuracy of 100% in the screening of DR. These results demonstrate that the proposed hybrid scheme is very promising for medical dataset classification.
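
One simple way to picture clustering-center-based feature weighting is sketched below. This is an assumption-laden simplification: k-means stands in for the fuzzy c-means, subtractive and Gaussian mixture variants compared in the paper, each feature is rescaled by the ratio of its grand mean to the mean of its cluster centers, and the data are synthetic placeholders rather than the UCI DR features.

# Rough sketch of clustering-center-based feature weighting (hypothetical variant).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def cluster_center_weights(X, n_clusters=2):
    centers = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X).cluster_centers_
    return X.mean(axis=0) / (np.abs(centers).mean(axis=0) + 1e-12)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 19))                 # e.g. 19 retinal image features
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # placeholder DR / non-DR labels

w = cluster_center_weights(X)
Xw = X * w                                     # weighted feature space
clf = KNeighborsClassifier(n_neighbors=5)
print("raw     :", cross_val_score(clf, X, y, cv=5).mean())
print("weighted:", cross_val_score(clf, Xw, y, cv=5).mean())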

Keywords: machine learning, data weighting, classification, data mining

Procedia PDF Downloads 301
3096 Selection of the Most Suitable Method for DNA Extraction from Muscle of Iran's Canned Tuna by Comparison of Different DNA Extraction Methods

Authors: Marjan Heidarzadeh

Abstract:

High quality and purity of DNA isolated from canned tuna are essential for species identification. In this study, the efficiency of five different methods for DNA extraction was compared: the Iranian national standard method, the CTAB precipitation method, the Wizard DNA Clean-Up system, NucleoSpin, and GenomicPrep. DNA was extracted from two different canned tuna products, in brine and in oil, of the same tuna species. Three samples of each type of product were analyzed with each of the methods. The quantity and quality of the extracted DNA were evaluated using the absorbance at 260 nm and the A260/A280 ratio measured with a Picodrop spectrophotometer. The results showed that DNA extraction from canned tuna preserved in different liquid media can be optimized by employing a specific DNA extraction method in each case. The best results were obtained with the CTAB method for canned tuna in oil and with the Wizard method for canned tuna in brine.

Keywords: canned tuna PCR, DNA, DNA extraction methods, species identification

Procedia PDF Downloads 627
3095 Solvent Extraction in Ionic Liquids: Structuration and Aggregation Effects on Extraction Mechanisms

Authors: Sandrine Dourdain, Cesar Lopez, Tamir Sukhbaatar, Guilhem Arrachart, Stephane Pellet-Rostaing

Abstract:

A promising challenge in solvent extraction is to replace the conventional organic solvents with ionic liquids (ILs). Depending on the extraction system, these new solvents show better efficiency than the conventional ones. Although some assumptions based on ion exchange have been proposed in the literature, these properties are not predictable because the involved mechanisms are still poorly understood. It is well established that the mechanisms underlying solvent extraction processes are based not only on the molecular chelation of the extractant molecules but also on their ability to form supra-molecular aggregates due to their amphiphilic nature. It is therefore essential to evaluate how ILs affect the aggregation properties of the extractant molecules. Our aim is to evaluate the influence of IL structure and polarity on solvent extraction mechanisms by looking at the aggregation of the extractant molecules in ILs. We compare extractant systems that are well characterized in common solvents and show, by means of SAXS and SANS measurements, that in the absence of IL ion exchange mechanisms, the extraction properties are related to aggregation.

Keywords: solvent extraction in Ionic liquid, aggregation, Ionic liquids structure, SAXS, SANS

Procedia PDF Downloads 127
3094 Extraction of Grapefruit Essential Oil from Grapefruit Peels

Authors: Adithya Subramanian, S. Ananthan, T. Prasanth, S. P. Selvabharathi

Abstract:

This project involves the extraction of grapefruit essential oil from grapefruit peels using various oils, such as castor oil, gingelly oil, and olive oil, as carrier oils. The main aim of this project is to extract the oil, which has numerous medicinal uses. The extraction can be performed by two methods. The project involves extraction of the oil with various carrier oils with a view to reducing the cost of production, and the physical properties of the extracted oil are examined.

Keywords: essential oil, carrier oil, medicinal uses, cost of production

Procedia PDF Downloads 401
3093 Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases

Authors: N. W. U. D. Chathurani, Shlomo Geva, Vinod Chandran, Proboda Rajapaksha

Abstract:

Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distribution of features and relative feature weights at the time of query processing. It is a simple yet effective approach, which is free from the effect of features' dimensions, ranges, internal feature normalization and the distance measure. This approach can easily be adopted in any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and Oliva and Torralba) and compared with existing approaches. Its effectiveness is confirmed by significantly improved performance in comparison with an independently evaluated baseline of previously proposed feature fusion approaches.
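
The fusion step can be pictured with the small sketch below; this is an assumed, simplified reading of the approach in which per-feature distances are normalised by their own distance distribution and then combined with relative weights at query time, on random placeholder descriptors.

# Illustrative sketch of distance-distribution-based multi-feature fusion.
import numpy as np

def fused_ranking(query_feats, db_feats, weights):
    """query_feats/db_feats: dicts feature_name -> array; weights: relative weights."""
    n_images = next(iter(db_feats.values())).shape[0]
    fused = np.zeros(n_images)
    for name, db in db_feats.items():
        d = np.linalg.norm(db - query_feats[name], axis=1)        # raw distances
        d = (d - d.mean()) / (d.std() + 1e-12)                    # normalise by distance distribution
        fused += weights[name] * d
    return np.argsort(fused)                                      # best matches first

rng = np.random.default_rng(0)
db = {"color": rng.random((500, 64)), "texture": rng.random((500, 32))}
q = {"color": rng.random(64), "texture": rng.random(32)}
print(fused_ranking(q, db, {"color": 0.6, "texture": 0.4})[:10])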

Keywords: feature fusion, image retrieval, membership function, normalization

Procedia PDF Downloads 321
3092 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks

Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone

Abstract:

Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is made manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on the data of a single patient retrieved from a publicly available EEG dataset.
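
A minimal sketch of the general idea, not the Training Builder tool itself, is given below: a few hand-picked statistics are extracted from sliding windows of a synthetic EEG channel and fed to a scikit-learn MLP. The window length, the feature set and the labels are all illustrative assumptions.

# Sliding-window feature extraction from one EEG channel, then an MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def window_features(signal, fs=256, win_s=2.0, step_s=1.0):
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        spectrum = np.abs(np.fft.rfft(w)) ** 2
        p = spectrum / spectrum.sum()
        feats.append([w.mean(), w.std(), np.ptp(w), -(p * np.log(p + 1e-12)).sum()])
    return np.array(feats)   # mean, std, peak-to-peak, spectral entropy per window

rng = np.random.default_rng(0)
eeg = rng.normal(size=256 * 600)                       # 10 min of synthetic EEG
X = window_features(eeg)
y = rng.integers(0, 2, size=len(X))                    # placeholder seizure / non-seizure labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print(accuracy_score(y_te, mlp.predict(X_te)))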

Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing

Procedia PDF Downloads 157
3091 Protein Extraction by Enzyme-Assisted Extraction Followed by Alkaline Extraction from Red Seaweed Eucheuma denticulatum (Spinosum) Used in Carrageenan Production

Authors: Alireza Naseri, Susan L. Holdt, Charlotte Jacobsen

Abstract:

In 2014, global carrageenan production was 60,000 tons, with a value of US$626 million. From this figure, it can be estimated that the total dried seaweed consumption for this production was at least 300,000 tons/year. The protein content of these types of seaweed is 5-25%. If just half of this total amount of protein could be extracted, 18,000 tons/year of a high-value protein product would be obtained. The overall aim of this study was to develop a technology that ensures further utilization of seaweed that at present serves only as a raw material for carrageenan production by single extraction. More specifically, proteins should be extracted from the seaweed either before or after extraction of carrageenan, with a focus on maintaining the quality of carrageenan as the main product. Different mechanical, chemical and enzymatic technologies were evaluated. The optimized process was implemented at lab scale and, based on its results, new experiments were carried out at pilot and larger scales. In order to calculate the efficiency of the new upstream multi-extraction process, the protein content was tested before and after extraction. After this step, the extraction of carrageenan was performed, and the carrageenan content and the effect of extraction on yield were evaluated. The functionality and quality of the carrageenan were measured based on rheological parameters. The results showed that by using the new multi-extraction process (patent submitted), it is possible to extract almost 50% of the total protein without any negative impact on carrageenan quality. Moreover, compared to the routine carrageenan extraction process, the new multi-extraction process increased the yield of carrageenan, and rheological properties such as gel strength of the final carrageenan showed a promising improvement. The extracted protein has initially been screened as a plant protein source in typical food applications. Further work will be carried out in order to improve properties such as color, solubility, and taste.

Keywords: carrageenan, extraction, protein, seaweed

Procedia PDF Downloads 249
3090 Extraction of M. paradisiaca L. Inflorescences Using Compressed Propane

Authors: Michele C. Mesomo, Madeline de Souza Correa, Roberta L. Kruger, Luis R. S. Kanda, Marcos L. Corazza

Abstract:

Natural plant extracts have been used for many years for different purposes, and recently they have been screened for their potential use as alternative remedies and food preservatives. Inflorescences of M. paradisiaca L., also known as the banana heart, are of great economic interest due to the fruit. All parts of the banana are used for many different purposes, including folk medicine. The use of extraction via supercritical technology has grown in recent years, though experimental information is still needed for the design of industrial plants. This work reports the extraction of Musa paradisiaca L. using compressed propane as solvent. The effects of the extraction conditions, pressure and temperature, on the yield were evaluated. The raw material, banana inflorescences, was dried at 313.15 K and milled. The particle size distribution used for packing the extraction cell was 12 mesh (23.5%), 16 mesh (23.5%), 32 mesh (34.5%), and 48 mesh (18.5%). The extractions were performed in a laboratory-scale unit at pressures of 3.0 MPa, 6.5 MPa and 10.0 MPa and at 308.15 K, 323.15 K and 338.15 K. The operating conditions tested achieved a maximum yield of 2.94 wt% for the extraction at 10.0 MPa and 338.15 K, the highest pressure and temperature. The lowest yield, 2.29 wt%, was obtained at the lowest pressure and highest temperature. Temperature presented a significant and positive effect on the extraction yield, while pressure had no significant effect on the yield. The overall extraction curves showed the typical behavior of the supercritical extraction procedure and reached a constant extraction rate at about 80 to 100 min. The largest amount of extract was obtained at the beginning of the process, within 10 to 60 min.

Keywords: banana, natural products, supercritical extraction, temperature

Procedia PDF Downloads 576
3089 Economic Activities Associated with Extraction of Riverbed Materials in the Tinau River, Nepal

Authors: Khet Raj Dahal, Dhruva Dhital, Chhatra Mani Sharma

Abstract:

A study was conducted from 2012 to 2013 in a selected reach of the Tinau River, Nepal. The main objective of the study was to quantify the employment and income generated from the extraction of construction materials from the river. A 10 km stretch of the river was selected for the study. A sample survey with a semi-structured questionnaire and field observation were the main tools used during the field investigation. Extraction of riverbed materials from the banks, bed and floodplain areas of the river has provided many kinds of job opportunities for the people living in the vicinity of the river. It has also generated a substantial amount of revenue, which has been invested in many kinds of social and infrastructure development over the years. Though extraction of riverbed materials is beneficial for income and employment generation, it also has negative environmental impacts in and around the river. Furthermore, the study concluded that riverbed extraction should be continued, with special monitoring and evaluation, in areas where there is still room for extraction.

Keywords: extraction, crusher plants, economic activities, Tinau River

Procedia PDF Downloads 670
3088 Modeling and Optimization of Algae Oil Extraction Using Response Surface Methodology

Authors: I. F. Ejim, F. L. Kamen

Abstract:

Aims: In this experiment, algae oil extraction with a combination of n-hexane and ethanol was investigated. The effects of extraction solvent concentration, extraction time and temperature on the yield and quality of oil were studied using Response Surface Methodology (RSM). Experimental Design: A Box-Behnken design was used to generate 17 experimental runs in a three-factor, three-level design in which oil yield, specific gravity, acid value and saponification value were evaluated as the responses. Results: A minimum oil yield of 17% and a maximum of 44% were realized. The optimum values for yield, specific gravity, acid value and saponification value from the overlay plot were 40.79%, 0.8788, 0.5056 mg KOH/g and 180.78 mg KOH/g, respectively, with a desirability of 0.801. The predicted maximum was a yield of 40.79% at a solvent concentration of 66.68 n-hexane, a temperature of 40.0°C and an extraction time of 4 h. Analysis of Variance (ANOVA) results showed that the linear and quadratic coefficients were all significant at p<0.05. The experiment was validated, and the results obtained agreed with the predicted values. Conclusion: Algae oil extraction was successfully optimized using RSM, and its quality indicates that it is suitable for many industrial uses.
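
For readers unfamiliar with the RSM step, a rough sketch is shown below: a second-order response surface is fitted to a small made-up design (these are not the study's measurements) and then queried at a chosen operating point. Scikit-learn's polynomial features stand in for dedicated RSM software.

# Sketch of the response-surface fit with placeholder data.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Placeholder design matrix: [solvent concentration (n-hexane), temperature (degC), time (h)]
X = np.array([[50, 30, 2], [50, 50, 2], [80, 30, 2], [80, 50, 2],
              [50, 40, 1], [80, 40, 1], [50, 40, 3], [80, 40, 3],
              [65, 30, 1], [65, 50, 1], [65, 30, 3], [65, 50, 3],
              [65, 40, 2], [65, 40, 2], [65, 40, 2]], dtype=float)
yield_pct = np.array([22, 28, 30, 35, 20, 27, 33, 41, 19, 25, 36, 44, 40, 39, 41], float)

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, yield_pct)
print("R^2 =", round(model.score(X, yield_pct), 3))
print("predicted yield at (66.68, 40, 4):", model.predict([[66.68, 40.0, 4.0]])[0])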

Keywords: algae oil, response surface methodology, optimization, Box-Behnken, extraction

Procedia PDF Downloads 303
3087 Liquid-Liquid Extraction of Rare Earths Elements by Use of Ionic Liquids

Authors: C. Lopez, S. Dourdain, G. Arrachart, S. Pellet-Rostaing

Abstract:

Ionic liquids (ILs) are considered a good alternative to organic solvents in extractive processes; however, the higher or lower extraction efficiency in ILs remains difficult to predict because of a lack of understanding of the extraction mechanisms in this class of diluents, making their application difficult to generalize. We have studied the extraction behavior of La(III) and Eu(III) from aqueous solution into n-dodecane and two ionic liquids, 1-ethyl-1-butylpiperidinium bis(trifluoromethylsulfonyl)imide [EBPip⁺][NTf₂⁻] and 1-ethyl-1-octylpiperidinium bis(trifluoromethylsulfonyl)imide [EOPip⁺][NTf₂⁻], at room temperature using N,N'-dimethyl-N,N'-dioctylhexylethoxymalonamide (DMDOHEMA) as extractant. Fe(III) was introduced into the aqueous phase in order to study the selectivity toward La(III) and Eu(III), and the effect of pH variation was investigated by using several HNO₃ concentrations. We found that the ionic liquid with the shorter alkyl chain, [EBPip⁺][NTf₂⁻], showed a higher extraction ability than [EOPip⁺][NTf₂⁻] and that the use of ILs as the organic solvent instead of n-dodecane greatly enhanced the extraction percentage of the target metals, with good selectivity. The cation ([EBPip⁺] or [EOPip⁺]) and anion ([NTf₂⁻]) concentrations in the aqueous phase have been determined in order to elucidate the extraction mechanism.

Keywords: extraction mechanism, ionic liquids, rare earths elements, solvent extraction

Procedia PDF Downloads 87
3086 The Solvent Extraction of Uranium, Plutonium and Thorium from Aqueous Solution by 1-Hydroxyhexadecylidene-1,1-Diphosphonic Acid

Authors: M. Bouhoun Ali, A. Y. Badjah Hadj Ahmed, M. Attou, A. Elias, M. A. Didi

Abstract:

In this paper, the solvent extraction of uranium(VI), plutonium(IV) and thorium(IV) from aqueous solutions using 1-hydroxyhexadecylidene-1,1-diphosphonic acid (HHDPA) in treated kerosene has been investigated. The HHDPA was previously synthesized and characterized by FT-IR, ¹H NMR and ³¹P NMR spectroscopy and elemental analysis. The effects of contact time, initial pH, initial metal concentration, aqueous/organic phase ratio, extractant concentration and temperature on the extraction process have been studied. Empirical modelling was performed using a 2⁵ full factorial design, and regression equations for metal extraction were determined from the data. The conventional log-log analysis of the extraction data reveals that the ratios of extractant to extracted U(VI), Pu(IV) and Th(IV) are 1:1, 1:2 and 1:2, respectively. Thermodynamic parameters showed that the extraction process was exothermic and spontaneous. The obtained optimal parameters were applied to real effluents containing uranium(VI), plutonium(IV) and thorium(IV) ions.
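
The slope (log-log) analysis mentioned above can be reproduced in a few lines; the distribution-ratio values below are placeholders, not the paper's data, and the fitted slope simply estimates how many extractant molecules associate with each extracted metal ion.

# Slope (log-log) analysis sketch with hypothetical numbers.
import numpy as np

hhdpa = np.array([0.01, 0.02, 0.05, 0.10, 0.20])      # extractant concentration, mol/L (hypothetical)
D_u   = np.array([0.8, 1.7, 4.1, 8.3, 16.0])          # distribution ratios of U(VI) (hypothetical)

slope, intercept = np.polyfit(np.log10(hhdpa), np.log10(D_u), 1)
print(f"slope ~ {slope:.2f} -> roughly {round(slope)} extractant molecule(s) per metal ion")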

Keywords: solvent extraction, uranium, plutonium, thorium, 1-hydroxyhexadecylidene-1,1-diphosphonic acid, aqueous solution

Procedia PDF Downloads 251
3085 A Quantitative Evaluation of Text Feature Selection Methods

Authors: B. S. Harish, M. B. Revanasiddappa

Abstract:

Due to the rapid growth of text documents in digital form, automated text classification has become an important research area in the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tfidf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ²), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM) and Symbolic Feature Selection (SFS), to classify text documents. We evaluated all the feature selection methods on standard datasets like 20 Newsgroups, the 4 University dataset and Reuters-21578.
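
Two of the listed criteria can be sketched quickly with scikit-learn, as below; this is only an illustration on two categories of 20 Newsgroups, with mutual information serving as a proxy for information gain, and it is not the paper's full experimental protocol.

# Scoring terms with chi-square and mutual information after a bag-of-words step.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif

news = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
vec = CountVectorizer(max_features=5000, stop_words="english")
X, y = vec.fit_transform(news.data), news.target
terms = np.array(vec.get_feature_names_out())

for name, score_func in [("chi-square", chi2), ("information gain (MI)", mutual_info_classif)]:
    selector = SelectKBest(score_func=score_func, k=500).fit(X, y)
    top = terms[np.argsort(selector.scores_)[::-1][:10]]    # ten highest-scoring terms
    print(name, "->", ", ".join(top))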

Keywords: classifiers, feature selection, text classification

Procedia PDF Downloads 423
3084 Image Analysis for Obturator Foramen Based on Marker-controlled Watershed Segmentation and Zernike Moments

Authors: Seda Sahin, Emin Akata

Abstract:

The obturator foramen is a specific structure in pelvic bone images, and its recognition is a new concept in medical image processing. Moreover, segmentation of bone structures such as the obturator foramen plays an essential role in clinical research in orthopedics. In this paper, we present a novel method to analyze the similarity between the substructures of the imaged region and a hand-drawn template on hip radiographs, in order to detect the obturator foramen accurately through the integrated use of marker-controlled watershed segmentation and the Zernike moment feature descriptor. Marker-controlled watershed segmentation is applied to separate the obturator foramen from the background effectively. The Zernike moment feature descriptor is used to match the binary template image against the segmented binary image of the obturator foramen for the final extraction. The proposed method is tested on 100 randomly selected hip radiographs. The experimental results show that our method is able to segment obturator foramens with 96% accuracy.
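
The overall pipeline can be pictured as below; the library calls (scikit-image watershed, SciPy labelling, mahotas Zernike moments) and the marker heuristic are assumptions made for illustration, and the images are random placeholders rather than radiographs or the authors' hand-drawn template.

# Marker-controlled watershed followed by a Zernike-moment shape descriptor.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.filters import threshold_otsu
import mahotas

def segment_and_describe(gray, radius=64, degree=8):
    mask = gray > threshold_otsu(gray)
    distance = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(distance > 0.6 * distance.max())    # interior markers
    labels = watershed(-distance, markers, mask=mask)          # marker-controlled watershed
    region = (labels > 0).astype(np.uint8)
    return mahotas.features.zernike_moments(region, radius, degree=degree)

def similarity(desc_a, desc_b):
    return -np.linalg.norm(desc_a - desc_b)                    # higher = more similar

rng = np.random.default_rng(0)
radiograph = rng.random((256, 256))                            # placeholder for a hip radiograph
template = (rng.random((256, 256)) > 0.5).astype(np.uint8)     # placeholder for the template
print(similarity(segment_and_describe(radiograph),
                 mahotas.features.zernike_moments(template, 64, degree=8)))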

Keywords: medical image analysis, segmentation of bone structures on hip radiographs, marker-controlled watershed segmentation, zernike moment feature descriptor

Procedia PDF Downloads 401
3083 Extraction of Aromatic Hydrocarbons from Lube Oil Using Surfactant as Additive

Authors: Izza Hidaya, Korichi Mourad

Abstract:

Solvent extraction is an effective method for reducing the aromatic content of lube oil, and it is frequently carried out with phenol, furfural or NMP (N-methyl pyrrolidone). The solvent power and selectivity can be further increased by using a surfactant as an additive, which facilitates phase separation and increases the raffinate yield. The aromatics in lube oil were extracted at different temperatures (ranging from 333.15 to 343.15 K) and different surfactant concentrations (ranging from 0.01 to 0.1 wt%). The extraction temperature and the amount of sodium lauryl ether sulfate in phenol were investigated systematically in order to determine their optimum values. The amounts of aromatic, paraffinic and naphthenic compounds were determined according to ASTM standards by measuring the refractive index (RI), viscosity, molecular weight and sulfur content. It was found that using 0.01 wt% surfactant at 343.15 K yields the optimum extraction conditions.

Keywords: extraction, lubricating oil, aromatics, hydrocarbons

Procedia PDF Downloads 493
3082 Protein and Lipid Extraction from Microalgae with Ultrasound Assisted Osmotic Shock Method

Authors: Nais Pinta Adetya, H. Hadiyanto

Abstract:

Microalgae have the potential to be utilized as food and natural colorants. Microalgal biomass consists of three main components: lipid, protein, and carbohydrate. A crucial step in producing lipid and protein from microalgae is extraction. Microalgae have a high water content (70-90%), which means that drying the biomass requires much more energy and may also damage the lipid and protein. Lipid extraction from wet biomass can take place efficiently when the microalgal cells are disrupted by the osmotic shock method. In this study, the osmotic shock method was integrated with ultrasound to maximize the extraction yield of lipid and protein from wet Spirulina sp. biomass. The study consisted of two steps: an osmotic shock process applied to the wet biomass, followed by ultrasound-assisted extraction. NaCl solution was used as the osmotic agent, at concentrations of 10%, 20%, and 30%. Extraction was conducted at 40°C for 20 minutes with an ultrasound frequency of 40 kHz. The optimal yields of protein (2.7%) and lipid (38%) were achieved at 20% osmotic agent concentration.

Keywords: extraction, lipid, osmotic shock, protein, ultrasound

Procedia PDF Downloads 324
3081 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, a novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from the Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of a training set of signal waveforms by using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes' theory the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered waveform of the signals, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from four voltage levels to the recovery of the signal waveform, the spatial resolution is improved to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is very important since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
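
The recovery step can be illustrated numerically as below. This is a minimal sketch on synthetic waveforms, not J-PET signals: a PCA basis is learned from training pulses, and a full waveform is recovered from a handful of samples by Tikhonov-regularised least squares, whose closed-form solution is c = (AᵀA + λI)⁻¹Aᵀy.

# PCA prior + Tikhonov-regularised recovery of a waveform from sparse samples.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

def waveform():                                    # toy fast pulse, random amplitude and decay
    a, w = rng.uniform(0.5, 1.5), rng.uniform(0.05, 0.15)
    return a * t * np.exp(-t / w)

train = np.array([waveform() for _ in range(500)])
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
V = Vt[:8].T                                       # 8 principal components as the sparse basis

true = waveform()
sample_idx = np.sort(rng.choice(len(t), 8, replace=False))   # 8 sparse samples of the pulse
y = true[sample_idx]

A = V[sample_idx]                                  # sampling operator restricted to the PCA basis
lam = 1e-2                                         # regularization strength
c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ (y - mean[sample_idx]))
recovered = mean + V @ c
print("relative recovery error:", np.linalg.norm(recovered - true) / np.linalg.norm(true))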

Keywords: plastic scintillators, positron emission tomography, statistical analysis, tikhonov regularization

Procedia PDF Downloads 416
3080 Research and Application of Feature Selection Based on IWO and Tabu Search

Authors: Laicheng Cao, Xiangqian Su, Youxiao Wu

Abstract:

Feature selection is one of the important problems in network security, pattern recognition, data mining and other fields. In order to remove redundant features and effectively improve the detection speed of intrusion detection systems, this paper proposes a new feature selection method based on the invasive weed optimization (IWO) algorithm and the tabu search (TS) algorithm. IWO is used for the global search and tabu search for the local search, in order to improve the results of the IWO algorithm. The experimental results show that the feature selection method can effectively remove redundant features from network data, reduce the feature selection time and, while guaranteeing an accurate detection rate, effectively improve the speed of the detection system.
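
A compact, simplified sketch of how such a hybrid could be wired together is given below. This is not the paper's implementation: an IWO-style population of binary feature masks spreads seeds around good solutions, a tabu search then refines the best mask by single-bit flips, the fitness is plain cross-validated accuracy with a size penalty, and the data are synthetic.

# Simplified IWO global search followed by tabu-search refinement of a feature mask.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum()               # small penalty for larger subsets

def iwo_search(pop=6, iters=8, n_feat=20):
    weeds = [rng.random(n_feat) < 0.5 for _ in range(pop)]
    for _ in range(iters):
        scores = np.array([fitness(w) for w in weeds])
        seeds = []
        for w, s in zip(weeds, scores):
            n_seeds = 1 + int(3 * (s - scores.min()) / (np.ptp(scores) + 1e-12))  # fitter weeds seed more
            for _ in range(n_seeds):
                child = w.copy()
                flip = rng.integers(0, n_feat, size=2)
                child[flip] = ~child[flip]
                seeds.append(child)
        weeds = sorted(weeds + seeds, key=fitness, reverse=True)[:pop]            # competitive exclusion
    return weeds[0]

def tabu_refine(mask, iters=10, tenure=5):
    best, tabu = mask.copy(), []
    for _ in range(iters):
        candidates = []
        for i in range(len(mask)):
            if i in tabu:
                continue
            m = best.copy()
            m[i] = ~m[i]
            candidates.append((fitness(m), i))
        f, i = max(candidates)
        if f >= fitness(best):
            best[i] = ~best[i]
        tabu = (tabu + [i])[-tenure:]            # short-term memory of recent moves
    return best

selected = tabu_refine(iwo_search())
print("selected features:", np.where(selected)[0])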

Keywords: intrusion detection, feature selection, iwo, tabu search

Procedia PDF Downloads 499
3079 Microwave-Accelerated Simultaneous Distillation–Extraction: Preparative Recovery of Volatiles from Food Products

Authors: Ferhat Mohamed, Boukhatem Mohamed Nadjib, Chemat Farid

Abstract:

Simultaneous distillation–extraction (SDE) is routinely used by analysts for sample preparation prior to gas chromatography analysis. In this work, a new process design and operation for microwave-assisted simultaneous distillation–solvent extraction (MW-SDE) of volatile compounds was developed. Using the proposed method, isolation, extraction and concentration of volatile compounds can be carried out in a single step. To demonstrate its feasibility, MW-SDE was compared with the conventional technique, simultaneous distillation–extraction (SDE), for gas chromatography-mass spectrometry (GC-MS) analysis of volatile compounds in a fresh orange juice and a dry spice, “carvi” (caraway) seeds. The SDE method required a long time (3 h) to isolate the volatile compounds and a large amount of organic solvent (200 mL of hexane) for further extraction, while MW-SDE needed little time (only 30 min) to prepare the sample and a smaller amount of organic solvent (10 mL of hexane). These results show that MW-SDE–GC-MS is a simple, rapid and low-solvent method for the determination of volatile compounds from aromatic plants.

Keywords: essential oil, extraction, distillation, carvi seeds

Procedia PDF Downloads 534
3078 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution

Authors: Haiyan Wu, Ying Liu, Shaoyun Shi

Abstract:

Authorship attribution is the task of extracting features to identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or some transparent machine learning methods gives a portrait of the authors' writing style. But these methods do not capture syntactic (e.g., dependency relationship) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks. However, few works take them together. Besides, predictions by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we not only utilize the statistical style and content features but also take advantage of both syntactic and semantic features. Different from an end-to-end neural model, feature selection and prediction are two separate steps in our method. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give predictions and an understandable representation of writing style. Experiments show that our extracted features can improve on the state-of-the-art methods on three benchmark datasets.
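
A toy version of the two-step pipeline is sketched below; the feature selector here is plain chi-square rather than the paper's attentive n-gram network, the texts and author labels are invented, and logistic regression supplies the interpretable final model.

# Character n-gram features, top-k selection, then logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["call me ishmael some years ago",
        "it was the best of times it was the worst of times",
        "whale ship ocean voyage harpoon",
        "paris london revolution guillotine"]
authors = ["melville", "dickens", "melville", "dickens"]

pipe = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),   # character n-grams capture style
    SelectKBest(chi2, k=30),                                    # keep the most discriminative n-grams
    LogisticRegression(max_iter=1000),                          # coefficients stay interpretable
)
pipe.fit(docs, authors)
print(pipe.predict(["the whale breached beside the ship"]))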

Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction

Procedia PDF Downloads 104
3077 Fractionation of Biosynthetic Mixture of Gentamicins by Reactive Extraction

Authors: L. Kloetzer, M. Poştaru, A. I. Galaction, D. Caşcaval

Abstract:

Gentamicin is an aminoglycoside antibiotic obtained industrially by biosynthesis with Micromonospora purpurea or M. echinospora, the product being a complex mixture of components with very similar structures. Among them, four exhibit the most important biological activity: gentamicins C1, C1a, C2, and C2a. The separation of gentamicin from fermentation broths at industrial scale is rather difficult, and it does not allow fractionation of the complex mixture of gentamicins in order to increase the therapeutic activity of the product. The aim of our experiments is to analyze the possibility of selectively separating the less active gentamicin, namely gentamicin C1, from the biosynthetic mixture by reactive extraction with di-(2-ethylhexyl) phosphoric acid (D2EHPA) dissolved in dichloromethane, followed by selective re-extraction of the most active gentamicins C1a, C2, and C2a. The experiments on the reactive extraction of gentamicins indicated the possibility of selectively separating gentamicin C1 from the mixture obtained by biosynthesis. The extraction selectivity is positively influenced by increasing the pH value of the aqueous solution and by using a D2EHPA concentration in the organic phase close to the value needed for an equimolecular ratio between the extractant and this gentamicin. For quantifying the selectivity of the separation, the selectivity factor, calculated as the ratio between the degree of reactive extraction of gentamicin C1 and the overall extraction degree of the gentamicins, was used. The possibility of removing gentamicin C1 at an extractant concentration of 10 g l⁻¹ and pH = 8 is presented. Under these conditions, the maximum value of the selectivity factor, 2.14, was obtained, which corresponds to an increase of the gentamicin C1 concentration from 31.92% in the biosynthetic mixture to 72% in the extract. The re-extraction of gentamicins C1, C1a, C2, and C2a with sulfuric acid from the extract previously obtained by reactive extraction (mixture A, the extract obtained by non-selective reactive extraction; mixture B, the extract obtained by selective reactive extraction) allows the most active gentamicins C1a, C2, and C2a to be separated selectively. For recovering only the active gentamicins C1a, C2, and C2a, the re-extraction must be carried out at very low acid concentrations, far below those corresponding to the stoichiometry of its chemical reactions with these gentamicins. The mixture resulting from re-extraction therefore contained 92.6% gentamicins C1a, C2, and C2a. By bringing together the aqueous solutions obtained by reactive extraction and re-extraction, the overall content of the active gentamicins in the final product becomes 89%, their loss reaching 0.3% relative to the initial biosynthetic product.
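
The selectivity factor defined above is straightforward to compute; the small example below uses hypothetical extraction degrees chosen only to show the calculation (they happen to reproduce a factor of about 2.14, but they are not the paper's measured values).

# Worked illustration of the selectivity factor (hypothetical extraction degrees).
def selectivity_factor(extracted_c1, initial_c1, extracted_total, initial_total):
    degree_c1 = extracted_c1 / initial_c1            # extraction degree of gentamicin C1
    degree_all = extracted_total / initial_total     # overall extraction degree of all gentamicins
    return degree_c1 / degree_all

# e.g. 0.80 of the C1 initially present is extracted vs. 0.374 of all gentamicins
print(round(selectivity_factor(0.80, 1.00, 0.374, 1.00), 2))   # ~2.14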

Keywords: di-(2-ethylhexyl) phosphoric acid, gentamicin, reactive extraction, selectivity factor

Procedia PDF Downloads 289
3076 Event Extraction, Analysis, and Event Linking

Authors: Anam Alam, Rahim Jamaluddin Kanji

Abstract:

With the rapid growth of events everywhere, event extraction has become an important means of retrieving information from unstructured data, and one of the challenging problems is to extract events from such data. An event is an observable occurrence of interaction among entities. This paper investigates the event extraction capabilities of three software tools: Wandora, Nitro and SPSS. We applied the standard text mining techniques of these tools to three datasets: (i) the Afghan War Diaries (AWD) collection, (ii) MUC4 and (iii) WebKB. Information retrieval measures such as precision and recall were computed under an extensive set of experiments for event extraction. The experimental study analyzes the difference between the events extracted by the software and by humans. This approach helps to construct an algorithm that can be applied with different machine learning methods.
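
For reference, the two retrieval measures used in the evaluation can be computed as below; the event counts are invented purely for illustration.

# Precision and recall of extracted events against human annotation.
def precision_recall(true_positive, false_positive, false_negative):
    precision = true_positive / (true_positive + false_positive)
    recall = true_positive / (true_positive + false_negative)
    return precision, recall

# e.g. a tool extracts 120 events, 90 of which match human-annotated events,
# while 60 annotated events are missed.
p, r = precision_recall(90, 30, 60)
print(f"precision={p:.2f} recall={r:.2f}")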

Keywords: event extraction, Wandora, nitro, SPSS, event analysis, extraction method, AFG, Afghan War Diaries, MUC4, 4 universities, dataset, algorithm, precision, recall, evaluation

Procedia PDF Downloads 556
3075 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose the implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, works on the basis of generalized single-hidden-layer feed-forward neural networks (SLFNs). However, the hidden layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the watermark embedding and extraction processes with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and the DWT is applied to each block to transform it into the low-frequency sub-band domain. Basically, ELM provides a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and the output layer of the SLFN, which is employed here for watermark embedding and extraction in a cover image. ELM has widespread applications ranging from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on the watermarked image, even when the image is subjected to different types of geometric and conventional attacks.
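
A bare-bones numerical sketch of the ELM learning scheme is given below: regularised least squares on a fixed random hidden layer. The DWT block processing and the watermark embedding of the paper are deliberately not reproduced, and the toy regression task is an assumption made only for illustration.

# Minimal ELM: random hidden layer, closed-form regularised output weights.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def fit(self, X, T):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))   # fixed random feature mapping
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # regularised pseudo-inverse: beta = (H^T H + reg*I)^-1 H^T T
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ T)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 8))
T = np.sin(X.sum(axis=1, keepdims=True))            # toy regression target
elm = SimpleELM().fit(X[:400], T[:400])
print("test MSE:", np.mean((elm.predict(X[400:]) - T[400:]) ** 2))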

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 283
3074 USE-Net: SE-Block Enhanced U-Net Architecture for Robust Speaker Identification

Authors: Kilari Nikhil, Ankur Tibrewal, Srinivas Kruthiventi S. S.

Abstract:

Conventional speaker identification systems often fall short of capturing the diverse variations present in speech data due to fixed-scale architectures. In this research, we propose a CNN-based architecture, USE-Net, designed to overcome these limitations. Leveraging two key techniques, our approach achieves superior performance on the VoxCeleb1 dataset without any pre-training. Firstly, we adopt a U-Net-inspired design to extract features at multiple scales, empowering our model to capture speech characteristics effectively. Secondly, we introduce the squeeze-and-excitation block to enhance spatial feature learning. The proposed architecture showcases significant advancements in speaker identification, outperforming existing methods, and holds promise for future research in this domain.
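
For context, a squeeze-and-excitation block of the kind referred to above can be sketched in PyTorch as follows; the channel count, reduction ratio and input shape are illustrative assumptions, not the paper's settings.

# Squeeze-and-excitation block: per-channel recalibration of feature maps.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                 # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                                            # recalibrate the feature maps

feats = torch.rand(4, 64, 32, 32)     # e.g. U-Net feature maps computed from mel-spectrograms
print(SEBlock(64)(feats).shape)       # torch.Size([4, 64, 32, 32])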

Keywords: multi-scale feature extraction, squeeze and excitation, VoxCeleb1 speaker identification, mel-spectrograms, USENet

Procedia PDF Downloads 43
3073 Automatic Threshold Search for Heat Map Based Feature Selection: A Cancer Dataset Analysis

Authors: Carlos Huertas, Reyes Juarez-Ramirez

Abstract:

Public health is one of the most critical issues today; therefore, there is great interest in improving technologies in the area of disease detection. With machine learning and feature selection, it has been possible to aid the diagnosis of several diseases such as cancer. In this work, we present an extension to the Heat Map Based Feature Selection algorithm; this modification allows automatic threshold parameter selection, which helps to improve the generalization performance on high-dimensional data such as mass spectrometry. We have performed a comparative analysis using multiple cancer datasets against the well-known Recursive Feature Elimination algorithm and our original proposal. The results show improved classification performance that is very competitive with current techniques.
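
The idea of searching for a score threshold automatically can be pictured with the sketch below; the scores are plain F-statistics rather than the Heat Map scores of the paper, and the data, classifier and candidate-quantile grid are illustrative assumptions.

# Cross-validated search for the best score threshold of a feature selector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=500, n_informative=20, random_state=0)
scores, _ = f_classif(X, y)

best = (-1.0, 0.0)
for q in np.linspace(0.5, 0.99, 20):                     # candidate score quantiles
    keep = scores >= np.quantile(scores, q)
    if keep.sum() == 0:
        continue
    acc = cross_val_score(LinearSVC(max_iter=5000), X[:, keep], y, cv=5).mean()
    best = max(best, (acc, q))

print(f"best threshold quantile={best[1]:.2f}, cv accuracy={best[0]:.3f}")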

Keywords: biomarker discovery, cancer, feature selection, mass spectrometry

Procedia PDF Downloads 300
3072 Word Spotting in Images of Handwritten Historical Documents

Authors: Issam Ben Jami

Abstract:

Information retrieval in digital libraries is very important because the most famous historical documents have significant value. Word spotting in historical documents is a very difficult problem, because the writing in such documents is naturally cursive and exhibits wide variability in the scale and translation of words within the same document. We first present a system for automatic recognition based on the extraction of interest points of words from the image model. The key point extraction phase is chosen so as to represent the image as a compact description of shapes in a multidimensional space. As a result, we use advanced methods that can find and describe interest points invariant to scale, rotation and lighting, which are linked to local configurations of pixels. We test this approach on documents of the 15th century. Our experiments give significant results.
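
A toy sketch of interest-point matching for word spotting follows; OpenCV's ORB detector is used here as a stand-in for the scale- and rotation-invariant descriptors discussed above, and the images are random placeholders rather than 15th-century documents.

# Keypoint matching between a word template and a document page image.
import cv2
import numpy as np

def match_score(word_img, page_img, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(word_img, None)
    kp2, des2 = orb.detectAndCompute(page_img, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)          # more consistent local matches -> word more likely present

word = (np.random.rand(60, 180) * 255).astype(np.uint8)     # placeholder word template
page = (np.random.rand(800, 600) * 255).astype(np.uint8)    # placeholder document page
print("good matches:", match_score(word, page))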

Keywords: feature matching, historical documents, pattern recognition, word spotting

Procedia PDF Downloads 246