Search results for: feature extraction method for tremor classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22356

22056 Solvent Extraction, Spectrophotometric Determination of Antimony(III) from Real Samples and Synthetic Mixtures Using O-Methylphenyl Thiourea as a Sensitive Reagent

Authors: Shashikant R. Kuchekar, Shivaji D. Pulate, Vishwas B. Gaikwad

Abstract:

A simple and selective method is developed for the solvent extraction spectrophotometric determination of antimony(III) using O-Methylphenyl Thiourea (OMPT) as a sensitive chromogenic chelating agent. The proposed method is based on the formation of an antimony(III)-OMPT complex, which was extracted with 0.0025 M OMPT in chloroform from an aqueous solution of antimony(III) in 1.0 M perchloric acid. The absorbance of this complex was measured at 297 nm against a reagent blank. Beer’s law was obeyed up to 15 µg mL⁻¹ of antimony(III). The molar absorptivity and Sandell’s sensitivity of the antimony(III)-OMPT complex in chloroform are 16.6730 × 10³ L mol⁻¹ cm⁻¹ and 0.00730282 µg cm⁻², respectively. The stoichiometry of the antimony(III)-OMPT complex, established by the slope ratio, mole ratio, and Job’s continuous variation methods, was 1:2. The complex was stable for more than 48 h. The interfering effect of various foreign ions was studied, and suitable masking agents were used wherever necessary to enhance the selectivity of the method. The proposed method was successfully applied to the determination of antimony(III) in real alloy samples and synthetic mixtures. The repeatability of the method was checked by the relative standard deviation (RSD) of 10 determinations, which was 0.42%.
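
As a quick consistency check of the Beer-Lambert quantities quoted above, the short sketch below reproduces the reported Sandell’s sensitivity from the molar absorptivity and predicts an absorbance for a hypothetical concentration within the stated linear range. The 1 cm path length and the atomic mass of antimony are standard values assumed here, not taken from the abstract.

```python
# Sketch: consistency check of the Beer-Lambert quantities reported above.
# Assumptions (not stated in the abstract): 1 cm path length, Sb atomic mass 121.76 g/mol.
EPSILON = 16.6730e3   # molar absorptivity, L mol^-1 cm^-1 at 297 nm (reported)
M_SB = 121.76         # g mol^-1, standard atomic mass of antimony (assumed)
PATH_CM = 1.0         # cm, assumed cuvette path length

# Sandell's sensitivity: mass of analyte per cm^2 giving an absorbance of 0.001
sandell = M_SB / EPSILON                                 # ug cm^-2
print(f"Sandell's sensitivity: {sandell:.8f} ug/cm^2")   # ~0.0073 ug/cm^2, as reported

# Beer's law prediction for a hypothetical 5 ug/mL Sb(III) solution
# (within the reported linear range of up to 15 ug/mL)
conc_ug_per_ml = 5.0
conc_mol_per_l = conc_ug_per_ml * 1e-3 / M_SB            # ug/mL -> g/L -> mol/L
absorbance = EPSILON * conc_mol_per_l * PATH_CM
print(f"Predicted absorbance at 297 nm: {absorbance:.3f}")
```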

Keywords: solvent extraction, antimony, spectrophotometry, real sample analysis

Procedia PDF Downloads 332
22055 Composite Approach to Extremism and Terrorism Web Content Classification

Authors: Kolade Olawande Owoeye, George Weir

Abstract:

Terrorism and extremism activities on the internet are becoming the most significant threats to national security because of their potential dangers. In response to this challenge, law enforcement and security authorities are actively implementing comprehensive measures to counter the use of the internet for terrorism. Achieving these measures requires intelligence gathering via the internet, including real-time monitoring of websites that may be used for recruitment and information dissemination, among other operations, by extremist groups. However, with billions of active webpages, real-time monitoring of all webpages becomes almost impossible. To narrow down the search domain, efficient webpage classification techniques are needed. This research proposes a new approach, the SentiPosit-based method, which combines features of the Posit-based method and the SentiStrength-based method for the classification of terrorism and extremism webpages. The experiment was carried out on 7500 webpages obtained through the TENE-webcrawler by the International Cyber Crime Research Centre (ICCRC). The webpages were manually grouped into three classes, ‘pro-extremist’, ‘anti-extremist’ and ‘neutral’, with 2500 webpages in each category. A supervised learning algorithm was then applied to the labelled dataset in order to build the model. The results obtained were compared with existing classification methods using prediction accuracy and runtime. Our proposed hybrid approach produced better classification accuracy than existing approaches within a reasonable runtime.

Keywords: sentiposit, classification, extremism, terrorism

Procedia PDF Downloads 278
22054 Comparison of Microwave-Assisted and Conventional Leaching for Extraction of Copper from Chalcopyrite Concentrate

Authors: Ayfer Kilicarslan, Kubra Onol, Sercan Basit, Muhlis Nezihi Saridede

Abstract:

Chalcopyrite (CuFeS2) is the most common primary mineral used for the commercial production of copper. Its low dissolution efficiency has prevented efficient industrial leaching of this mineral in sulfate media. Ferric ions, bacteria, oxygen and other oxidants have been used as oxidizing agents in the leaching of chalcopyrite in sulfate and chloride media under atmospheric or pressure leaching conditions. Two leaching methods were studied to evaluate chalcopyrite (CuFeS2) dissolution in acid media. First, conventional oxidative acid leaching was carried out using sulfuric acid (H2SO4) and potassium dichromate (K2Cr2O7) as the oxidant at atmospheric pressure. Second, microwave-assisted acid leaching was performed using a microwave accelerated reaction system (MARS) for the same reaction media. Parameters affecting copper extraction, such as leaching time, leaching temperature, concentration of H2SO4 and concentration of K2Cr2O7, were investigated. The results of the conventional acid leaching experiments were compared to those of the microwave leaching method. It was found that the copper extraction obtained with microwave leaching under high temperature and high oxidant concentrations is higher than that obtained conventionally. A copper extraction of 81% was obtained by conventional oxidative acid leaching in 180 min with 0.3 mol/L K2Cr2O7 in 0.5 M H2SO4 at 50 ºC, while 93.5% copper extraction was obtained in 60 min with the microwave leaching method under the same conditions.

Keywords: extraction, copper, microwave-assisted leaching, chalcopyrite, potassium dichromate

Procedia PDF Downloads 370
22053 Comparison of Linear Discriminant Analysis and Support Vector Machine Classifications for Electromyography Signals Acquired at Five Positions of Elbow Joint

Authors: Amna Khan, Zareena Kausar, Saad Malik

Abstract:

Biomechatronics has found extensive applications in the field of rehabilitation and has been contributing since World War II to improving the applicability of prostheses and assistive devices in real-life scenarios. In this paper, classification accuracies are compared for two classifiers across five positions of the elbow. Electromyography (EMG) signals were acquired directly from the skeletal muscles of the human forearm for each of the three defined positions and at modified extreme positions of elbow flexion and extension, using the 8-electrode Myo armband sensor. Features were extracted from the filtered EMG signals for each position. The performance of two classifiers, support vector machine (SVM) and linear discriminant analysis (LDA), was compared by analyzing the classification accuracies. SVM yielded classification accuracies between 90-96%, in contrast to 84-87% for LDA over the five defined elbow positions, keeping the number of samples and the selected features the same for both SVM and LDA.
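
As an illustration of the comparison described above, the following minimal scikit-learn sketch trains an SVM and an LDA classifier on the same feature matrix and reports their accuracies. The feature values, the number of features and the five position labels are placeholders, since the abstract does not specify the extracted EMG features.

```python
# Sketch of the SVM vs. LDA comparison on EMG features (assumed layout: one row
# per window of 8-channel Myo data, one column per extracted feature, labels 0-4
# for the five elbow positions; the real features are not specified here).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))          # placeholder EMG feature vectors
y = rng.integers(0, 5, size=500)        # placeholder labels for 5 elbow positions

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    model = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name} accuracy: {acc:.3f}")
```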

Keywords: classification accuracies, electromyography, linear discriminant analysis (LDA), Myo armband sensor, support vector machine (SVM)

Procedia PDF Downloads 368
22052 Review and Analysis of Parkinson's Tremor Genesis Using Mathematical Model

Authors: Pawan Kumar Gupta, Sumana Ghosh

Abstract:

Parkinson's Disease (PD) is a long-term neurodegenerative movement disorder of the central nervous system with a broad range of symptoms related to the motor system. The common symptoms of PD are tremor, rigidity, bradykinesia/akinesia, and postural instability, but the clinical picture includes other motor and non-motor issues. The motor symptoms of the disease are a consequence of the death of neurons in a region of the midbrain known as the substantia nigra pars compacta, leading to a decreased level of the neurotransmitter dopamine. The cause of this neuron death is not clearly known but involves the formation of Lewy bodies, an abnormal aggregation or clumping of the protein alpha-synuclein in the neurons. Unfortunately, there is no cure for PD, and the management of this disease is challenging. Therefore, it is critical for a patient to be diagnosed at an early stage. A limited choice of drugs is available to improve the symptoms, but they become less and less effective over time. Apart from that, with rapid growth in the field of science and technology, other methods such as multi-area brain stimulation are used to treat patients. In order to develop advanced techniques and to support drug development for treating PD patients, an accurate mathematical model is needed to explain the underlying relationship between dopamine secretion in the brain and hand tremor. There has been a lot of effort in the past few decades on modeling PD tremors and treatment effects from a computational point of view. These models can effectively save time as well as the cost of drug development for the pharmaceutical industry and be helpful for selecting appropriate treatment mechanisms among all possible options. In this review paper, an effort is made to survey studies on PD modeling and analysis and to highlight some of the key advances in the field over the past decades, with a discussion of the current challenges.

Keywords: Parkinson's disease, deep brain stimulation, tremor, modeling

Procedia PDF Downloads 140
22051 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features

Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi

Abstract:

Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning-based approach that trains convolutional neural network (CNN) models to classify causal relations. We experiment with several CNN architectures based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations, as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.
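
The abstract does not give the network architecture, so the sketch below is only an illustrative Conv1D text classifier in the spirit of CNN-based relation classification: padded token-id sequences are embedded, convolved, max-pooled and classified into relation classes. The vocabulary size, sequence length and number of classes are assumptions.

```python
# Illustrative Conv1D relation classifier (not the authors' exact architecture):
# inputs are padded token-id sequences; labels encode causal direction / no relation.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, N_CLASSES = 20000, 50, 3   # assumed values

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 100),                  # word embeddings
    layers.Conv1D(128, kernel_size=3, activation="relu"),
    layers.GlobalMaxPooling1D(),                        # sentence representation
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```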

Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation

Procedia PDF Downloads 732
22050 Integration of Educational Data Mining Models to a Web-Based Support System for Predicting High School Student Performance

Authors: Sokkhey Phauk, Takeo Okazaki

Abstract:

The challenging task in educational institutions is to maximize student performance and minimize the failure rate of poor-performing students. An effective way to address this task is to learn student learning patterns together with highly influential factors and to obtain an early prediction of student learning outcomes, at a timely stage, for setting up improvement policies. Educational data mining (EDM) is an emerging disciplinary field of data mining, statistics, and machine learning concerned with extracting useful knowledge and information for the sake of improvement and development in the education environment. The aim of this work is to propose EDM techniques and integrate them into a web-based system for predicting poor-performing students. A comparative study of prediction models is conducted, and high-performing models are then developed to obtain better performance. The hybrid random forest (Hybrid RF) produces the most successful classification. For the purpose of intervention and improving learning outcomes, a feature selection method, MICHI, which combines the mutual information (MI) and chi-square (CHI) algorithms based on ranked feature scores, is introduced to select a dominant feature set that improves prediction performance; the obtained dominant set is then used as information for intervention. Using the proposed EDM techniques, an academic performance prediction system (APPS) is subsequently developed for educational stakeholders to obtain an early prediction of student learning outcomes for timely intervention. Experimental outcomes and evaluation surveys report the effectiveness and usefulness of the developed system, which is used to help educational stakeholders and related individuals intervene and improve student performance.
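
A minimal sketch of a MICHI-style selection is shown below: features are ranked separately by mutual information and by the chi-square statistic, and the two rankings are combined. Averaging the two ranks is an assumption, since the abstract does not give the exact combination rule, and the data here are synthetic placeholders.

```python
# Sketch of a MICHI-style feature selection: rank features by mutual information
# and by chi-square, then combine the two rankings (rank averaging is assumed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, chi2
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)
X_pos = MinMaxScaler().fit_transform(X)        # chi2 requires non-negative features

mi_scores = mutual_info_classif(X_pos, y, random_state=0)
chi_scores, _ = chi2(X_pos, y)

# Higher score -> better rank (rank 0 is best)
mi_rank = np.argsort(np.argsort(-mi_scores))
chi_rank = np.argsort(np.argsort(-chi_scores))
combined_rank = (mi_rank + chi_rank) / 2.0

k = 8                                          # size of the dominant feature set (assumed)
dominant = np.argsort(combined_rank)[:k]
print("Selected dominant feature indices:", sorted(dominant.tolist()))
```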

Keywords: academic performance prediction system, educational data mining, dominant factors, feature selection method, prediction model, student performance

Procedia PDF Downloads 106
22049 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big-data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, and the comparison showed that MNIST was classified more accurately than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. However, quantum deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed and compared on colored images to determine how much better they are than classical ones; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale and resized to 28 × 28 pixels, and 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of processing speed and accuracy when used to perform classification on colored classes.
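
The hybrid encoding step described above can be illustrated with a minimal PennyLane circuit run on a classical simulator: a few pre-processed grayscale pixel values (scaled to [0, 1]) are angle-encoded, entangled and rotated, and Pauli-Z expectation values are read out as quantum features. The circuit layout is illustrative only and is not the authors’ QCNN.

```python
# Minimal PennyLane sketch of the hybrid step: encode a small patch of normalised
# grayscale pixels as rotation angles and measure expectation values on a simulator.
# The circuit layout is illustrative, not the authors' exact QCNN.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_feature_map(pixels, weights):
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)      # angle-encode pixel intensities
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])              # entangle neighbouring qubits
    for i in range(n_qubits):
        qml.RZ(weights[i], wires=i)             # trainable rotation layer
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

patch = np.array([0.1, 0.5, 0.8, 0.3])          # a 2x2 patch of normalised pixels
weights = np.array([0.4, 0.1, 0.9, 0.2], requires_grad=True)
print(quantum_feature_map(patch, weights))      # quantum features for the classical head
```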

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 129
22048 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults

Authors: Michal Krecichwost, Zuzanna Miodonska, Pawel Badura

Abstract:

The diagnosis of sigmatism is mostly based on the observation of the articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, in particular in the oral cavity of the patient. Speech processing can help objectify the therapy and simplify the verification of its progress. In the described study, a methodology for the classification of the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults and were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bit. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under the supervision of a speech therapy expert. In the recordings, the analyzed phone [s] was surrounded by vowels, viz. ASA, ESE, ISI, SPA, USU, YSY. Thirteen MFCC (mel-frequency cepstral coefficient) and RMS (root mean square) values are calculated within each frame of the analyzed phoneme. Additionally, 3 fricative formants, along with corresponding amplitudes, are determined for the entire segment. In order to aggregate the information within the segment, the average value of each MFCC coefficient is calculated; all features of other types are aggregated by means of their 75th percentile. The proposed feature aggregation reduces the size of the feature vector used in the classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage, with one group consisting of pathological phones and the other of normative ones. The proposed feature vector yields classification sensitivity and specificity above the 90% level for individual logatomes. The use of fricative-formant information improves the MFCC-only classification results by an average of 5 percentage points. The study shows that the use of phone-specific parameters improves the efficiency of pathology detection compared with traditional methods of speech signal parameterization.
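
A minimal sketch of the per-segment feature aggregation described above is given below, with librosa assumed for MFCC and RMS computation; the fricative-formant features are omitted for brevity, and the segment boundaries are assumed to come from the manual annotations.

```python
# Sketch of the per-segment feature aggregation (librosa assumed for MFCC/RMS;
# fricative-formant features are omitted). Segment boundaries are assumed known.
import numpy as np
import librosa

def phoneme_features(wav_path, start_s, end_s):
    y, sr = librosa.load(wav_path, sr=44100)
    seg = y[int(start_s * sr):int(end_s * sr)]              # the annotated [s] segment

    mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13)    # shape (13, n_frames)
    rms = librosa.feature.rms(y=seg)[0]                     # shape (n_frames,)

    mfcc_mean = mfcc.mean(axis=1)                # average of each MFCC over frames
    rms_p75 = np.percentile(rms, 75)             # 75th percentile for the other features
    return np.concatenate([mfcc_mean, [rms_p75]])            # 14-dimensional vector

# Vectors from many segments would then feed a binary SVM, e.g.:
# from sklearn.svm import SVC; SVC(kernel="rbf").fit(X_train, y_train)
```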

Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing

Procedia PDF Downloads 283
22047 A New Approach to Image Stitching of Radiographic Images

Authors: Somaya Adwan, Rasha Majed, Lamya'a Majed, Hamzah Arof

Abstract:

In order to produce images of whole body parts, X-ray images of different portions of the body are assembled using image stitching methods. A new stitching method that jointly exploits a feature-based method and a direct-based method to identify and merge pairs of X-ray medical images is presented in this paper. The performance of the proposed hybrid approach is investigated, and its ability to stitch and merge overlapping pairs of images is demonstrated. On standard databases, the proposed method displays comparable, if not superior, performance to other feature-based methods reported in the literature. These results are promising and demonstrate the potential of the proposed method for further development to tackle more advanced stitching problems.
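
For illustration, the sketch below shows a generic feature-based stitching step in OpenCV (ORB keypoints, brute-force matching and a RANSAC homography); it stands in for the feature-based half of the hybrid approach and does not reproduce the authors’ direct (intensity-based) refinement.

```python
# Generic feature-based stitching sketch (ORB keypoints + RANSAC homography);
# the direct (intensity-based) refinement of the hybrid method is not reproduced.
import cv2
import numpy as np

def stitch_pair(img_left, img_right, min_matches=10):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise ValueError("not enough matches in the overlap region")

    # Homography mapping the right image into the left image's frame
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    canvas = cv2.warpPerspective(img_right, H, (w * 2, h))   # warp right image
    canvas[0:h, 0:w] = img_left                              # overlay left image
    return canvas
```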

Keywords: image stitching, direct based method, panoramic image, X-ray

Procedia PDF Downloads 541
22046 Secure Image Retrieval Based on Orthogonal Decomposition under Cloud Environment

Authors: Y. Xu, L. Xiong, Z. Xu

Abstract:

In order to protect data privacy, images with sensitive or private information need to be encrypted before being outsourced to the cloud. However, this causes difficulties in image retrieval and data management. A secure image retrieval method based on orthogonal decomposition is proposed in this paper. The image is divided into two different components, for which encryption and feature extraction are executed separately. As a result, the cloud server can extract features from an encrypted image directly and compare them with the features of the query image, so that the user can retrieve the desired image. Unlike other methods, the proposed method places no special requirements on the encryption algorithm. Experimental results show that the proposed method achieves better security and better retrieval precision.

Keywords: secure image retrieval, secure search, orthogonal decomposition, secure cloud computing

Procedia PDF Downloads 484
22045 Classification of Construction Projects

Authors: M. Safa, A. Sabet, S. MacGillivray, M. Davidson, K. Kaczmarczyk, C. T. Haas, G. E. Gibson, D. Rayside

Abstract:

To address construction project requirements and specifications, scholars and practitioners need to establish a taxonomy according to a scheme that best fits their need. While existing characterization methods are continuously being improved, new ones are devised to cover project properties which have not been previously addressed. One such method, the Project Definition Rating Index (PDRI), has received limited consideration strictly as a classification scheme. Developed by the Construction Industry Institute (CII) in 1996, the PDRI has been refined over the last two decades as a method for evaluating a project's scope definition completeness during front-end planning (FEP). The main contribution of this study is a review of practical project classification methods, and a discussion of how PDRI can be used to classify projects based on their readiness in the FEP phase. The proposed model has been applied to 59 construction projects in Ontario, and the results are discussed.

Keywords: project classification, project definition rating index (PDRI), risk, project goals alignment

Procedia PDF Downloads 678
22044 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic-lumbar spine from 15 healthy patients and 15 with confirmed osteoporosis were used for the analysis, yielding 120 samples with dimensions of 50x50 pixels. The set of features was obtained from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet, giving 290 textural feature descriptors. The dimension of the feature space was reduced using three selection methods: the Fisher coefficient (FC), mutual information (MI), and the minimization of classification error probability combined with the average correlation coefficients between the chosen features (POE + ACC). Each of them returned the ten top-ranked features according to its own coefficient. The Fisher coefficient and mutual information selections returned the same features arranged in a different order; in both rankings, the 50th percentile (Perc.50%) was ranked first, and the next selected features come from the co-occurrence matrix. The feature sets selected in this process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT) and reduced error pruning tree (REPT). In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV) and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the feature set selected by the POE + ACC method. For the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3% and PPV = 93.3%; the values of the other two parameters, TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. For the logistic model trees classifier, the same ACC value of 90% was obtained, together with the highest values of TNR = 88.3% and NPV = 88.3%, while the other two parameters remained close to the highest values, TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that classification trees are an effective method of classifying texture features and allow the condition of spongy tissue to be identified for healthy cases and those with osteoporosis.
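
The evaluation metrics used above can be computed directly from a binary confusion matrix, as the sketch below shows. The Hoeffding tree and logistic model trees are Weka classifiers, so a scikit-learn random forest and synthetic placeholder data stand in here purely to illustrate the ACC/TPR/TNR/PPV/NPV computation.

```python
# Sketch of the evaluation step. Hoeffding trees and logistic model trees are Weka
# classifiers; a scikit-learn random forest stands in here so the ACC/TPR/TNR/PPV/NPV
# computation can be shown end to end on placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))        # placeholder: 120 samples x 10 selected texture features
y = rng.integers(0, 2, size=120)      # 0 = healthy, 1 = osteoporotic (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
acc = (tp + tn) / (tp + tn + fp + fn)   # overall classification accuracy
tpr = tp / (tp + fn)                    # sensitivity
tnr = tn / (tn + fp)                    # specificity
ppv = tp / (tp + fp)                    # positive predictive value
npv = tn / (tn + fn)                    # negative predictive value
print(f"ACC={acc:.2f} TPR={tpr:.2f} TNR={tnr:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```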

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 178
22043 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid

Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan

Abstract:

In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing industry in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B) and extraction time (C) on the efficiency of copper extraction were investigated with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process, and the results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. The results showed that all linear terms and the interaction term between the acid-to-sludge ratio and extraction time (BC) had a statistically significant influence on the efficiency of copper extraction under the tested conditions; the most significant effect was ascribed to the acid-to-sludge ratio (B), followed by sulfuric acid concentration (A), extraction time (C) and the interaction term BC, respectively. The remaining two-way interaction terms (AB, AC) and the three-way interaction term (ABC) were not statistically significant at the 0.05 significance level. A model equation was derived for the copper extraction process, and the process was optimized using a multiple-response method, the desirability (D) function, targeting maximum removal. The optimum conditions, giving 99% copper extraction, were found to be a sulfuric acid concentration of 0.9 M and a ratio of acid volume (mL) to sludge quantity (g) of 100:1, with an extraction time of 80 min. Experiments under the optimized conditions were carried out to validate the accuracy of the model.

Keywords: acid treatment, chemical extraction, sludge, waste management

Procedia PDF Downloads 198
22042 Analysis of Nonlinear and Non-Stationary Signal to Extract the Features Using Hilbert Huang Transform

Authors: A. N. Paithane, D. S. Bormane, S. D. Shirbahadurkar

Abstract:

Emotion recognition is an important research topic in the field of human-computer interfaces. A novel technique for feature extraction (FE) is presented here, and a new method for human emotion recognition based on the Hilbert-Huang transform (HHT) is used. This method is suitable for analyzing nonlinear and non-stationary signals. Each signal is decomposed into intrinsic mode functions (IMFs) using empirical mode decomposition (EMD), and these functions are used to extract features through fission and fusion processes. The decomposition technique we adopt is a new technique for adaptively decomposing signals; in this perspective, we report the potential usefulness of EMD-based techniques. We evaluated the algorithm on the manually annotated Augsburg University database.
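
A minimal sketch of HHT-style feature extraction is shown below, assuming the PyEMD package for EMD and SciPy’s Hilbert transform; the instantaneous amplitude/frequency statistics used as features are an illustrative choice, not necessarily those of the fission/fusion process described above.

```python
# Minimal HHT-style feature extraction sketch: decompose a signal into IMFs with
# EMD (PyEMD package assumed), then derive simple statistics from the Hilbert
# analytic signal of each IMF. The specific statistics are an illustrative choice.
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

def hht_features(signal, fs):
    imfs = EMD().emd(signal)                      # shape (n_imfs, n_samples)
    feats = []
    for imf in imfs:
        analytic = hilbert(imf)
        amplitude = np.abs(analytic)                          # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)       # instantaneous frequency
        feats.extend([amplitude.mean(), amplitude.std(),
                      inst_freq.mean(), inst_freq.std()])
    return np.array(feats)

fs = 256.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.randn(t.size)
print(hht_features(sig, fs)[:8])
```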

Keywords: intrinsic mode function (IMF), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), emotion detection, electrocardiogram (ECG)

Procedia PDF Downloads 580
22041 Determination of Processing Parameters of Decaffeinated Black Tea by Using Pilot-Scale Supercritical CO₂ Extraction

Authors: Saziye Ilgaz, Atilla Polat

Abstract:

There is a need to develop new processing techniques that ensure the safety and quality of the final product, decaffeinated black tea, while minimizing the adverse impact of extraction solvents on the environment and their residue levels in the final product. In this study, pilot-scale supercritical carbon dioxide (SCCO₂) extraction was used to produce decaffeinated black tea in place of solvent extraction. Pressure (250, 375, 500 bar), extraction time (60, 180, 300 min), temperature (55, 62.5, 70 °C), CO₂ flow rate (1, 2, 3 LPM) and co-solvent quantity (0, 2.5, 5 %mol) were selected as the extraction parameters. A five-factor Box-Behnken experimental design with three center points was performed to generate 46 different processing conditions for caffeine removal from the black tea samples. As a result of these 46 experiments, the caffeine content of the black tea samples was reduced from 2.16% to 0-1.81%. The experiments showed that extraction time, pressure, CO₂ flow rate and co-solvent quantity had a great impact on the decaffeination yield. Response surface methodology (RSM) was used to optimize the parameters of the supercritical carbon dioxide extraction. The optimum extraction parameters for decaffeinated black tea were as follows: extraction temperature of 62.5 °C, extraction pressure of 375 bar, CO₂ flow rate of 3 LPM, extraction time of 176.5 min and co-solvent quantity of 5 %mol.

Keywords: supercritical carbon dioxide, decaffeination, black tea, extraction

Procedia PDF Downloads 364
22040 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier

Authors: Atanu K Samanta, Asim Ali Khan

Abstract:

Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of brain tumors through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from the segmented tumor region using the gray-level co-occurrence matrix (GLCM), and an artificial neural network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy of this method is found to be 98%, which is significantly good.
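
The GLCM feature step can be sketched with scikit-image (version ≥ 0.19 naming assumed) feeding a small MLP that stands in for the ANN stage; the level-set segmentation is not shown, and the patch data and labels below are placeholders.

```python
# Sketch of the GLCM feature stage (scikit-image >= 0.19 assumed; level-set
# segmentation is not shown). An MLPClassifier stands in for the ANN stage.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(tumor_patch_uint8):
    """Texture features from a segmented (grayscale, uint8) tumor region."""
    glcm = graycomatrix(tumor_patch_uint8,
                        distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])   # 16 values

# Placeholder training data: one GLCM feature vector per segmented tumor image.
rng = np.random.default_rng(0)
X = np.vstack([glcm_features(rng.integers(0, 256, (64, 64), dtype=np.uint8))
               for _ in range(50)])
y = rng.integers(0, 5, size=50)        # placeholder labels for 5 tumor types

ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", ann.score(X, y))
```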

Keywords: brain tumor, computer-aided diagnostic (CAD) system, gray-level co-occurrence matrix (GLCM), tumor segmentation, level set method

Procedia PDF Downloads 512
22039 Segmentation of Korean Words on Korean Road Signs

Authors: Lae-Jeong Park, Kyusoo Chung, Jungho Moon

Abstract:

This paper introduces an effective method for segmenting Korean text (place names in Korean) from a Korean road sign image. A Korean advanced directional road sign is composed of several types of visual information, such as arrows, place names in Korean and English, and route numbers. Automatic classification of the visual information and extraction of Korean place names from road sign images make it possible to avoid a great deal of manual input to a nationwide road sign management database system. We propose a series of problem-specific heuristics that correctly segment Korean place names, which are the most crucial information, from the other information by effectively leaving out non-text information. The experimental results on a dataset of 368 road sign images show a detection rate of 96% per Korean place name and 84% per road sign image.

Keywords: segmentation, road signs, characters, classification

Procedia PDF Downloads 444
22038 Solvent Extraction of Rb and Cs from Jarosite Slag Using t-BAMBP

Authors: Zhang Haiyan, Su Zujun, Zhao Fengqi

Abstract:

Lepidolite, after the extraction of lithium by the sulfate process, produces large amounts of jarosite slag containing substantial Rb and Cs. Separation and recovery of rubidium (Rb) and cesium (Cs) would allow full use of the lithium mica. XRF analysis showed that the slag mainly contains K, Rb, Cs and Al. Fractional solvent extraction tests were carried out; the results show that, using 20% t-BAMBP in 80% sulfonated kerosene, the separation of Rb and Cs can be achieved by adjusting the alkalinity. The extraction follows the order Cs > Rb, and the Cs/Rb and Rb/K ratios can reach above 1500 and 2500, respectively.

Keywords: cesium, jarosite slag, rubidium, solvent extraction, t-BAMBP

Procedia PDF Downloads 587
22037 Understanding Cognitive Fatigue from fMRI Scans with Self-Supervised Learning

Authors: Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Fillia Makedon, Glenn Wylie

Abstract:

Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that records neural activations in the brain by capturing the blood oxygen level in different regions based on the task performed by a subject. Given fMRI data, the problem of predicting the state of cognitive fatigue in a person has not been investigated to its full extent. This paper proposes tackling this issue as a multi-class classification problem by dividing the state of cognitive fatigue into six levels, ranging from no fatigue to extreme fatigue. We built a spatio-temporal model that uses convolutional neural networks (CNN) for spatial feature extraction and a long short-term memory (LSTM) network for temporal modeling of 4D fMRI scans. We also applied a self-supervised method called MoCo (Momentum Contrast) to pre-train our model on the public dataset BOLD5000 and fine-tuned it on our labeled dataset to predict cognitive fatigue. Our novel dataset contains fMRI scans from Traumatic Brain Injury (TBI) patients and healthy controls (HCs) performing a series of N-back cognitive tasks. This method establishes a state-of-the-art technique for analyzing cognitive fatigue from fMRI data and outperforms previous approaches to this problem.
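
A minimal PyTorch sketch of the CNN + LSTM pairing is given below: a small 3D CNN encodes each fMRI volume and an LSTM models the sequence of volume embeddings before a six-way fatigue classification head. Layer sizes and input dimensions are illustrative, and the MoCo pre-training stage is omitted.

```python
# Minimal sketch of a CNN + LSTM spatio-temporal model for 4D fMRI (PyTorch assumed).
# Layer sizes and input dimensions are illustrative; MoCo pre-training is omitted.
import torch
import torch.nn as nn

class FatigueNet(nn.Module):
    def __init__(self, n_classes=6, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # spatial encoder per volume
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)   # temporal model
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, 1, D, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)   # encode each volume
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                          # logits over 6 fatigue levels

logits = FatigueNet()(torch.randn(2, 8, 1, 32, 32, 32))
print(logits.shape)    # torch.Size([2, 6])
```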

Keywords: fMRI, brain imaging, deep learning, self-supervised learning, contrastive learning, cognitive fatigue

Procedia PDF Downloads 189
22036 A New Method of Extracting Polyphenols from Honey Using a Biosorbent Compared to the Commercial Resin Amberlite XAD2

Authors: Farid Benkaci-Ali, Abdelhamid Neggad, Sophie Laurent

Abstract:

A new method for extracting polyphenols from honey using a biodegradable resin was developed and compared with the common commercial resin Amberlite XAD2. For this purpose, three honey samples of Algerian origin were selected for the study of different physico-chemical and biochemical parameters. After extraction of the target compounds by both resins, the polyphenol content was determined, the antioxidant activity was tested, and LC-MS analyses were performed for identification and quantification. The results showed that the physico-chemical and biochemical parameters meet the norms of the International Honey Commission, and the H1 sample appeared to be of the highest quality. The optimal conditions for extraction with the biodegradable resin were a pH of 3, an adsorbent dose of 40 g/L, a contact time of 50 min, an extraction temperature of 60°C and no stirring. Both resins could be regenerated and reused for three cycles. The polyphenol contents demonstrated a higher extraction efficiency for the biosorbent than for XAD2, especially for H1. LC-MS analyses allowed the identification and quantification of fifteen compounds in the honey samples extracted with both resins, the most abundant compound being 3,4,5-trimethoxybenzoic acid. In addition, the biosorbent extracts showed stronger antioxidant activities than the XAD2 extracts.

Keywords: extraction, polyphenols, biosorbent, resin amberlite, HPLC-MS

Procedia PDF Downloads 105
22035 Removal of Cobalt(II) and Copper(II) from Sulfate Solutions by Solvent Extraction with Capric Acid in Chloroform

Authors: A. Bara, D. Barkat

Abstract:

Liquid-liquid extraction is one of the most useful techniques for the selective removal and recovery of metal ions from aqueous solutions, applied in purification processes in numerous chemical and metallurgical industries. In this work, the liquid-liquid extraction of cobalt(II) and copper(II) from aqueous solution by capric acid (HL) in chloroform at 25°C has been studied. Our interest in this paper is the effect of the capric acid concentration on the extraction of Co(II) and Cu(II), in order to determine which complexes are formed in the organic phase at various capric acid concentrations. Cobalt(II) and copper(II) are extracted as the complexes CoL2(HL)2 and CuL2(HL)2, respectively.

Keywords: capric acid, cobalt(II), copper(II), liquid-liquid extraction

Procedia PDF Downloads 441
22034 Microwave and Ultrasound Assisted Extraction of Pectin from Mandarin and Lemon Peel: Comparisons between Sources and Methods

Authors: Pınar Karbuz, A. Seyhun Kıpcak, Mehmet B. Piskin, Emek Derun, Nurcan Tugrul

Abstract:

Pectin is a complex colloidal polysaccharide found in the cell walls of young plants such as fruits and vegetables. It acts as a thickening, stabilizing and gelling agent in foods. Pectin was extracted from mandarin and lemon peels using ultrasound- and microwave-assisted extraction methods to compare these two sources and methods of pectin production. In this work, the effect of microwave power (360, 600 W) and irradiation time (1, 2, 3 min) on the yield of pectin extracted from mandarin and lemon peels by microwave-assisted extraction (MAE) was investigated. For ultrasound-assisted extraction (UAE), the parameters were temperature (60, 75 °C) and sonication time (15, 30, 45 min); hydrochloric acid (HCl) was used as the extracting agent for both extraction methods. The highest yields of pectin extracted from lemon peels were 8.16% (w/w) at 75 °C and 45 min by UAE and 8.58% (w/w) at 360 W and 1 min by MAE. The highest yields of pectin extracted from mandarin peels were 11.29% (w/w) at 75 °C and 45 min by UAE and 16.44% (w/w) at 600 W and 1 min by MAE. The results showed that, of the two extraction methods, microwave-assisted extraction gave the better yield, and that mandarin peels contain more pectin than lemon peels. These results therefore suggest that MAE can be used as an efficient and rapid method for pectin extraction and that mandarin peels should be preferred over lemon peels as a source of pectin.

Keywords: mandarin peel, lemon peel, pectin, ultrasound, microwave, extraction

Procedia PDF Downloads 234
22033 A Nucleic Acid Extraction Method for High-Viscosity Floricultural Samples

Authors: Harunori Kawabe, Hideyuki Aoshima, Koji Murakami, Minoru Kawakami, Yuka Nakano, David D. Ordinario, C. W. Crawford, Iri Sato-Baran

Abstract:

With the recent advances in gene editing technologies allowing the rewriting of genetic sequences, additional market growth in the global floriculture market beyond previous trends is anticipated through increasingly sophisticated plant breeding techniques. As a prerequisite for gene editing, the gene sequence of the target plant must first be identified. This necessitates the genetic analysis of plants with unknown gene sequences, the extraction of RNA, and comprehensive expression analysis. Consequently, a technology capable of consistently and effectively extracting high-purity DNA and RNA from plants is of paramount importance. Although model plants, such as Arabidopsis and tobacco, have established methods for DNA and RNA extraction, floricultural species such as roses present unique challenges. Different techniques to extract DNA and RNA from various floricultural species were investigated. Upon sampling and grinding the petals of several floricultural species, it was observed that nucleic acid extraction from the ground petal solutions of low viscosity was straightforward; solutions of high viscosity presented a significant challenge. It is postulated that the presence of substantial quantities of polysaccharides and polyphenols in the plant tissue was responsible for the inhibition of nucleic acid extraction. Consequently, attempts were made to extract high-purity DNA and RNA by improving the CTAB method and combining it with commercially available nucleic acid extraction kits. The quality of the total extracted DNA and RNA was evaluated using standard methods. Finally, the effectiveness of the extraction method was assessed by determining whether it was possible to create a library that could be applied as a suitable template for a next-generation sequencer. In conclusion, a method was developed for consistent and accurate nucleic acid extraction from high-viscosity floricultural samples. These results demonstrate improved techniques for DNA and RNA extraction from flowers, help facilitate gene editing of floricultural species and expand the boundaries of research and commercial opportunities.

Keywords: floriculture, gene editing, next-generation sequencing, nucleic acid extraction

Procedia PDF Downloads 29
22032 Active Features Determination: A Unified Framework

Authors: Meenal Badki

Abstract:

We address the issue of active feature determination, where the objective is to determine the set of examples on which additional data (such as lab tests) needs to be gathered, given a large number of examples with some features (such as demographics) and some examples with all the features (such as the complete Electronic Health Record). We note that certain features may be more costly, unique, or laborious to gather. Our proposal is a general active learning approach that is independent of classifiers and similarity metrics. It allows us to identify examples that differ from the full data set and obtain all the features for the examples that match. Our comprehensive evaluation shows the efficacy of this approach, which is driven by four authentic clinical tasks.

Keywords: feature determination, classification, active learning, sample-efficiency

Procedia PDF Downloads 75
22031 Liver Tumor Detection by Classification through FD Enhancement of CT Image

Authors: N. Ghatwary, A. Ahmed, H. Jalab

Abstract:

In this paper, an approach for liver tumor detection in computed tomography (CT) images is presented. The detection process is based on classifying the features of the target liver region as either tumor or non-tumor. Fractional differential (FD) enhancement is applied to the liver CT images with the aim of enhancing texture and edge features. A fusion method is then applied to merge the various enhanced images and produce improved features, which increases the classification accuracy. Each image is divided into NxN non-overlapping blocks from which the desired features are extracted. A support vector machine (SVM) classifier is trained on a supplied dataset different from the tested one. Finally, each block is classified as tumor or non-tumor. Our approach is validated on a group of patients’ CT liver tumor datasets, and the experimental results demonstrate the detection efficiency of the proposed technique.

Keywords: fractional differential (FD), computed tomography (CT), fusion, alpha, texture features

Procedia PDF Downloads 358
22030 Transfer Learning for Protein Structure Classification at Low Resolution

Authors: Alexander Hudson, Shaogang Gong

Abstract:

Structure determination is key to understanding protein function at a molecular level. Whilst significant advances have been made in predicting structure and function from amino acid sequence, researchers must still rely on expensive, time-consuming analytical methods to visualise detailed protein conformation. In this study, we demonstrate that it is possible to make accurate (≥80%) predictions of protein class and architecture from structures determined at low (>3 Å) resolution, using a deep convolutional neural network trained on high-resolution (≤3 Å) structures represented as 2D matrices. Thus, we provide proof of concept for high-speed, low-cost protein structure classification at low resolution, and a basis for extension to prediction of function. We investigate the impact of the input representation on classification performance, showing that side-chain information may not be necessary for fine-grained structure predictions. Finally, we confirm that high-resolution, low-resolution and NMR-determined structures inhabit a common feature space, and thus provide a theoretical foundation for boosting with single-image super-resolution.

Keywords: transfer learning, protein distance maps, protein structure classification, neural networks

Procedia PDF Downloads 136
22029 Analysis of Non-Uniform Characteristics of Small Underwater Targets Based on Clustering

Authors: Tianyang Xu

Abstract:

Small underwater targets generally have a non-centrosymmetric geometry, and under active sonar detection conditions the acoustic scattering field of the target is spatially inhomogeneous. In view of these problems, this paper takes the hemispherical cylindrical shell as the research object and, considering the angle continuity implicit in the echo characteristics, proposes a cluster-driven method for analyzing the non-uniform angular characteristics of the target echo. First, the target echo features are extracted and feature vectors are constructed. Second, the t-SNE algorithm is used to strengthen the internal relationships of the feature vectors in a low-dimensional feature space and to construct a visual feature space. Finally, the implicit angular relationship between echo features is extracted by unsupervised cluster analysis. The reconstruction results of the local geometric structure of the target corresponding to the different categories show that the method can effectively divide the angular intervals of the local structure of the target according to its natural acoustic scattering characteristics.
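
The embedding-plus-clustering step can be sketched as below with scikit-learn; k-means on the t-SNE embedding is an assumption, since the abstract does not name the clustering algorithm, and the echo feature vectors are placeholders.

```python
# Sketch of the cluster-driven analysis: embed echo feature vectors with t-SNE,
# then cluster in the low-dimensional space. KMeans is an assumption; the abstract
# does not name the clustering algorithm. Feature vectors here are placeholders.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
echo_features = rng.normal(size=(360, 32))   # placeholder: one feature vector per aspect angle

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(echo_features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)

# Each cluster groups aspect-angle intervals with similar scattering behaviour.
for k in range(4):
    idx = np.where(labels == k)[0]
    print(f"cluster {k}: {idx.size} angles, e.g. indices {idx[:5].tolist()}")
```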

Keywords: underwater target, non-uniform characteristics, cluster-driven method, acoustic scattering characteristics

Procedia PDF Downloads 132
22028 A Study on the Application of Machine Learning and Deep Learning Techniques for Skin Cancer Detection

Authors: Hritwik Ghosh, Irfan Sadiq Rahat, Sachi Nandan Mohanty, J. V. R. Ravindra

Abstract:

In the rapidly evolving landscape of medical diagnostics, the early detection and accurate classification of skin cancer remain paramount for effective treatment outcomes. This research delves into the transformative potential of Artificial Intelligence (AI), specifically Deep Learning (DL), as a tool for discerning and categorizing various skin conditions. Utilizing a diverse dataset of 3,000 images representing nine distinct skin conditions, we confront the inherent challenge of class imbalance. This imbalance, where conditions like melanomas are over-represented, is addressed by incorporating class weights during the model training phase, ensuring an equitable representation of all conditions in the learning process. Our pioneering approach introduces a hybrid model, amalgamating the strengths of two renowned Convolutional Neural Networks (CNNs), VGG16 and ResNet50. These networks, pre-trained on the ImageNet dataset, are adept at extracting intricate features from images. By synergizing these models, our research aims to capture a holistic set of features, thereby bolstering classification performance. Preliminary findings underscore the hybrid model's superiority over individual models, showcasing its prowess in feature extraction and classification. Moreover, the research emphasizes the significance of rigorous data pre-processing, including image resizing, color normalization, and segmentation, in ensuring data quality and model reliability. In essence, this study illuminates the promising role of AI and DL in revolutionizing skin cancer diagnostics, offering insights into its potential applications in broader medical domains.
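
A sketch of a two-backbone feature-fusion classifier with class weighting is given below using tf.keras; the head sizes, image size, training settings and class counts are illustrative assumptions, not the authors’ exact configuration.

```python
# Sketch of a VGG16 + ResNet50 feature-fusion classifier with class weighting
# (tf.keras assumed). Head sizes, image size, class counts and training settings
# are illustrative; in practice each backbone also has its own preprocess_input.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, applications

NUM_CLASSES, IMG_SIZE = 9, (224, 224, 3)

inputs = layers.Input(shape=IMG_SIZE)
vgg = applications.VGG16(include_top=False, weights="imagenet", pooling="avg")
resnet = applications.ResNet50(include_top=False, weights="imagenet", pooling="avg")
vgg.trainable = resnet.trainable = False          # use both backbones as frozen extractors

features = layers.Concatenate()([vgg(inputs), resnet(inputs)])   # fuse the two feature sets
x = layers.Dense(256, activation="relu")(features)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Class weights counteract the imbalance (e.g. an over-represented melanoma class).
counts = np.array([700, 400, 300, 250, 300, 250, 300, 250, 250])   # placeholder counts
class_weight = {i: counts.sum() / (len(counts) * c) for i, c in enumerate(counts)}
# model.fit(train_ds, epochs=10, class_weight=class_weight)
```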

Keywords: artificial intelligence, machine learning, deep learning, skin cancer, dermatology, convolutional neural networks, image classification, computer vision, healthcare technology, cancer detection, medical imaging

Procedia PDF Downloads 86
22027 Stereotypical Motor Movement Recognition Using Microsoft Kinect with Artificial Neural Network

Authors: M. Jazouli, S. Elhoufi, A. Majda, A. Zarghili, R. Aalouane

Abstract:

Autism spectrum disorder is a complex developmental disability defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior, providing a clinical tool that facilitates the diagnosis of ASD for doctors. We focus on the automatic identification, in real time, of five repetitive gestures among autistic children: body rocking, hand flapping, fingers flapping, hand on the face and hands behind the back. In this paper, we present a gesture recognition system for children with autism that consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using an artificial neural network (ANN). The first module uses the Microsoft Kinect sensor, the second chooses points of interest from the 3D skeleton to characterize the gestures, and the last proposes a neural connectionist model to perform the supervised classification of the data. The experimental results show that our system can achieve a recognition rate above 93.3%.
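
The sketch below illustrates only the third module: a feed-forward ANN trained on feature vectors derived from Kinect skeleton joints. The tracking and the exact point-of-interest features are not reproduced, so placeholder data are used; scikit-learn’s MLPClassifier stands in for the neural connectionist model.

```python
# Sketch of the gesture-recognition module only: a feed-forward ANN trained on
# feature vectors derived from Kinect skeleton joints (tracking and exact features
# are not reproduced; placeholder data are used).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

GESTURES = ["body_rocking", "hand_flapping", "fingers_flapping",
            "hand_on_face", "hands_behind_back"]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))                 # placeholder skeleton-derived features
y = rng.integers(0, len(GESTURES), size=1000)   # placeholder gesture labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
ann.fit(X_tr, y_tr)
print("recognition rate:", ann.score(X_te, y_te))
```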

Keywords: ASD, artificial neural network, kinect, stereotypical motor movements

Procedia PDF Downloads 306