Search results for: routing techniques
5551 Fine Characterization of Glucose Modified Human Serum Albumin by Different Biophysical and Biochemical Techniques at a Range
Authors: Neelofar, Khursheed Alam, Jamal Ahmad
Abstract:
Protein modification in diabetes mellitus may lead to early glycation products (EGPs), also called Amadori products, as well as advanced glycation end products (AGEs). Early glycation involves the reaction of glucose with N-terminal and lysyl side-chain amino groups to form a Schiff's base, which undergoes rearrangement to form the more stable early glycation product known as the Amadori product. Beyond the Amadori stage, the reactions become more complicated, leading to the formation of AGEs that interact with various AGE receptors, thereby playing an important role in the long-term complications of diabetes. The Maillard (nonenzymatic glycation) reaction accelerates in diabetes due to hyperglycemia and alters serum proteins' structure and normal functions, leading to micro- and macrovascular complications in diabetic patients. In this study, Human Serum Albumin (HSA) at a constant concentration was incubated with different concentrations of glucose at 37 °C for a week. By the 4th day, Amadori product had formed, as confirmed by the colorimetric NBT and TBA assays, both of which authenticate the early glycation product. Conformational changes in native HSA as well as in all samples of Amadori albumin prepared with different glucose concentrations were investigated by various biophysical and biochemical techniques. The main biophysical techniques used were hyperchromicity, quenching of fluorescence intensity, FTIR, CD and SDS-PAGE. Further conformational changes were observed by biochemical assays, mainly HMF formation, fructosamine, reduction of fructosamine with NaBH4, carbonyl content estimation, lysine and arginine residue estimation, ANS binding and thiol group estimation. This study finds structural and biochemical changes in Amadori-modified HSA, relative to native HSA, over a normal to chronic range of glucose. As the glucose concentration was increased from the normal to the chronic range, the biochemical and structural changes also increased.
The greatest alteration in the secondary and tertiary structure and conformation of glycated HSA was observed at the highest chronic concentration (75 mM) of glucose. Although Amadori-modified proteins, like AGEs, have been found to be involved in secondary complications of diabetes, very few studies have analyzed the conformational changes in Amadori-modified proteins due to early glycation. Most published studies address structural changes in Amadori protein at a single glucose concentration; no study was found comparing the biophysical and biochemical changes in HSA due to early glycation across a range of glucose concentrations at a constant incubation time. This study therefore provides information about the biochemical and biophysical changes that occur in Amadori-modified albumin over a glucose range from normal to chronic in diabetes. Many interventions are currently in use, i.e., glycaemic control, insulin treatment and other chemical therapies, that can control many aspects of diabetes. However, even with intensive use of current antidiabetic agents, more than 50% of type 2 diabetic patients suffer poor glycaemic control and 18% develop serious complications within six years of diagnosis. Experimental evidence related to diabetes suggests that preventing the nonenzymatic glycation of relevant proteins, or blocking their biological effects, might beneficially influence the evolution of vascular complications in diabetic patients; alternatively, quantitation of the Amadori adduct of HSA by authentic antibodies against HSA-EGPs could serve as a marker for early detection of the initiation/progression of secondary complications of diabetes. This research work may be helpful for the same.
Keywords: diabetes mellitus, glycation, albumin, Amadori, biophysical and biochemical techniques
Procedia PDF Downloads 272
5550 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer
Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom
Abstract:
Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN), together with five machine learning algorithms: Decision Tree (C4.5), Naïve Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN) and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify it. The main purpose of the study is to predict and diagnose breast cancer by applying the mentioned algorithms, and to identify the most effective of them with respect to confusion matrix, accuracy and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
Keywords: breast cancer, multi-layer perceptron, Naïve Bayes, SVM, decision tree, convolutional neural network, XGBoost, KNN
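The evaluation step named in the abstract (confusion matrix, accuracy, precision) can be sketched in pure Python. This is an illustrative sketch, not the authors' code; the labels below are invented to demonstrate the computation.

```python
# Binary evaluation metrics for a benign(0)/malignant(1) classifier.

def confusion_matrix(y_true, y_pred):
    """Return (tp, fp, fn, tn) for binary labels, 1 = malignant."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def accuracy(y_true, y_pred):
    tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
    return (tp + tn) / (tp + fp + fn + tn)

def precision(y_true, y_pred):
    tp, fp, fn, tn = confusion_matrix(y_true, y_pred)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical predictions from one of the compared classifiers:
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))   # 0.75
print(precision(y_true, y_pred))  # 0.75
```

In practice these metrics would be computed per classifier (MLP, CNN, C4.5, NB, SVM, KNN, XGBoost) on held-out test folds and then compared.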
Procedia PDF Downloads 75
5549 Thick Data Techniques for Identifying Abnormality in Video Frames for Wireless Capsule Endoscopy
Authors: Jinan Fiaidhi, Sabah Mohammed, Petros Zezos
Abstract:
Capsule endoscopy (CE) is an established noninvasive diagnostic modality for investigating small bowel disease. CE has a pivotal role in assessing patients with suspected bleeding or identifying evidence of active Crohn's disease in the small bowel. However, CE produces lengthy videos of at least eighty thousand frames, recorded at 2 frames per second, and gastroenterologists cannot dedicate 8 to 15 hours to reading the CE video frames to arrive at a diagnosis. This is why analyzing CE videos with modern artificial intelligence techniques becomes a necessity. However, machine learning, including deep learning, has failed to report robust results because of the lack of large samples to train its neural nets. In this paper, we describe a thick data approach that learns from a few anchor images. We use sound datasets like KVASIR and CrohnIPI to filter candidate frames that include interesting anomalies in any CE video. We identify candidate frames based on feature extraction, providing representative measures of the anomaly such as its size and its color contrast against the image background, and later feed these features to a decision tree that can classify the candidate frames as indicating a condition such as Crohn's disease. The accuracy of our thick data approach in detecting Crohn's disease, based on the presence of ulcer areas in the candidate frames, was 89.9% for KVASIR and 83.3% for CrohnIPI. We are continuing our research to fine-tune our approach by adding more thick data methods to enhance diagnosis accuracy.
Keywords: thick data analytics, capsule endoscopy, Crohn's disease, siamese neural network, decision tree
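The feature-to-decision step described above can be illustrated with a hand-built rule in the spirit of a decision tree over the two features the paper mentions (anomaly size and color contrast). The thresholds here are invented for illustration only; they are not taken from the paper or its datasets.

```python
# Toy decision rule flagging a candidate CE frame from extracted features.
# Thresholds are hypothetical; a real system would learn them from data.

def classify_frame(anomaly_area_px, contrast):
    """Return True if the frame is flagged as a candidate anomaly."""
    if anomaly_area_px < 50:     # region too small to be an ulcer area
        return False
    if contrast < 0.3:           # region blends into the mucosa background
        return False
    return True                  # large, high-contrast region: candidate

print(classify_frame(200, 0.6))  # True
print(classify_frame(20, 0.6))   # False
```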
Procedia PDF Downloads 156
5548 Ankh Key Broadband Array Antenna for 5G Applications
Authors: Noha M. Rashad, W. Swelam, M. H. Abd ElAzeem
Abstract:
A simple array antenna design supporting millimeter-wave applications, usable in short-range wireless communications such as 5G, is presented in this paper. The design enhances the use of the V-band, per IEEE standards: the antenna works in the 70 GHz band with a bandwidth of more than 11 GHz and a peak gain of more than 13 dBi. The design is simulated using different numerical techniques, which show very good agreement with one another.
Keywords: 5G technology, array antenna, microstrip, millimeter wave
Procedia PDF Downloads 306
5547 Eco-Friendly Textiles: The Power of Natural Dyes
Authors: Bushra
Abstract:
This paper explores the historical significance, ecological benefits, and contemporary applications of natural dyes in textile dyeing, aiming to provide a comprehensive overview of their potential to contribute to a sustainable fashion industry while minimizing ecological footprints. The research examines the potential of natural dyes as a sustainable alternative to synthetic dyes in the textile industry, covering their historical context, sources, and environmental benefits. Natural dyes come from plants, animals, and minerals, including roots, leaves, bark, fruits, flowers, and insects, while metal salts are used as mordants to fix the dyes to fabrics. Natural dyeing involves extraction, mordanting, and dyeing techniques; experimental research shows that optimizing these processes can enhance the performance of natural dyes, making them viable for contemporary textile applications. Natural dyes offer eco-friendly benefits such as biodegradability, non-toxicity, and renewability, reducing pollution, promoting biodiversity, and reducing reliance on petrochemicals. They nevertheless face challenges in color consistency, scalability, and performance, and industrial production must meet modern consumer standards for durability and colorfastness. Contemporary initiatives in the textile industry include fashion brands like Eileen Fisher and Patagonia incorporating natural dyes, artisans like India Flint's Botanical Alchemy promoting traditional dyeing techniques, and research projects like the European Union's Horizon 2020 program. Natural dyes offer the textile industry a sustainable solution, reducing environmental impact and promoting harmony with nature, and research and innovation are paving the way for their widespread adoption, transforming textile dyeing.
Keywords: historical significance, textile industry, natural dyes, sustainability
Procedia PDF Downloads 48
5546 Low-Voltage and Low-Power Bulk-Driven Continuous-Time Current-Mode Differentiator Filters
Authors: Ravi Kiran Jaladi, Ezz I. El-Masry
Abstract:
Emerging technologies such as ultra-wideband wireless access that operate at ultra-low power present several challenges, as their inherent design limits the use of voltage-mode filters. Continuous-time current-mode (CTCM) filters have therefore become very popular in recent times, since they have a wider dynamic range, improved linearity, and extended bandwidth compared to their voltage-mode counterparts. The goal of this research is to develop analog filters suitable for current CMOS technology scaling. The bulk-driven MOSFET is one of the most popular low-power design techniques for these challenges, while other techniques have obvious shortcomings. In this work, a CTCM gate-driven (GD) differentiator with a frequency range from dc to 100 MHz is presented, operating at a very low supply voltage of 0.7 V. A novel CTCM bulk-driven (BD) differentiator has been designed for the first time, which reduces power consumption to a fraction of that of the GD differentiator. Both the GD and BD differentiators have been simulated in CADENCE using TSMC 65 nm technology for all the bilinear and biquadratic band-pass frequency responses. These basic building blocks can be used to implement higher-order filters: a 6th-order cascade CTCM Chebyshev band-pass filter has been designed using both the GD and BD techniques. In conclusion, low-power GD and BD 6th-order Chebyshev stagger-tuned band-pass filters were simulated, the parameters obtained from all the resulting realizations are analyzed and compared, and Monte Carlo analysis is performed for both 6th-order filters, with the results of the sensitivity analysis presented.
Keywords: bulk-driven (BD), continuous-time current-mode filters (CTCM), gate-driven (GD)
Procedia PDF Downloads 260
5545 Non-Invasive Techniques of Analysis of Painting in Forensic Fields
Authors: Radka Sefcu, Vaclava Antuskova, Ivana Turkova
Abstract:
A growing market for modern artworks at high prices leads to the creation and sale of artwork counterfeits. Material analysis is an important part of the process of assessing authenticity, and knowledge of the materials and techniques used by the original authors is also necessary. This contribution presents the possibilities of non-invasive methods of structural analysis in research on paintings. It was proved that unambiguous identification of many art materials is feasible without sampling. The combination of Raman spectroscopy with FTIR external reflection enabled the identification of pigments and binders on selected artworks of prominent Czech painters from the first half of the 20th century: Josef Čapek, Emil Filla, Václav Špála and Jan Zrzavý. Raman spectroscopy confirmed the presence of a wide range of white pigments (lead white, zinc white, titanium white, barium white) and also Freeman's white as a special white painting pigment. Good results were obtained for red, blue and most of the yellow areas, whereas identification of green pigments was often impossible due to strong fluorescence. Oil was confirmed as the binding medium on most of the analyzed artworks via FTIR external reflection. The collected data provide a valuable background for determining the art materials characteristic of each painter (his palette) and their development over time. The results obtained will further serve as comparative material for the authentication of artworks. This work has been financially supported by the project of the Ministry of the Interior of the Czech Republic: The Development of a Strategic Cluster for Effective Instrumental Technological Methods of Forensic Authentication of Modern Artworks (VJ01010004).
Keywords: non-invasive analysis, Raman spectroscopy, FTIR-external reflection, forgeries
Procedia PDF Downloads 172
5544 Comprehensive Risk Analysis of Decommissioning Activities with Multifaceted Hazard Factors
Authors: Hyeon-Kyo Lim, Hyunjung Kim, Kune-Woo Lee
Abstract:
The decommissioning of nuclear facilities can be seen as a sequence of problem-solving activities, partly because working environments may be contaminated by radiological exposure, and partly because industrial hazards such as fire, explosions, toxic materials, and electrical and physical hazards may also exist. Risk assessment techniques for individual hazard factors are becoming familiar to industrial workers as safety technology advances, but methods for integrating their results are not. Furthermore, few workers have extensive experience with past decommissioning operations. Many countries around the world have therefore been trying to develop appropriate techniques to guarantee the safety and efficiency of the process. Despite this, no domestic or international standard yet exists, since nuclear facilities are too diverse and unique; consequently, the overall risk must be anticipated and assessed situation by situation. This paper aimed to find an appropriate technique to integrate individual risk assessment results from the viewpoint of experts. On one hand, the whole risk assessment activity for decommissioning operations was modeled as a sequence of individual risk assessment steps; on the other, a hierarchical risk structure was developed. A risk assessment procedure that elicits individual hazard factors one by one was then introduced with reference to the standard operating procedure (SOP) and hierarchical task analysis (HTA). Assuming quantification and normalization of individual risks, relative weight factors were estimated using the conventional Analytic Hierarchy Process (AHP), and the result was reviewed against the judgment of experts. In addition, taking the ambiguity of human judgment into consideration, a discussion based upon fuzzy inference was added with a mathematical case study.
Keywords: decommissioning, risk assessment, analytic hierarchy process (AHP), fuzzy inference
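The AHP weighting step described above can be sketched as follows; this is a minimal illustration using the row geometric-mean approximation of the priority vector, with a hypothetical pairwise comparison matrix for three invented hazard factors, not values from the study.

```python
import math

def ahp_weights(matrix):
    """Row geometric means of a pairwise comparison matrix, normalized to sum to 1."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical judgments: radiological hazard is 3x as important as fire,
# 5x as important as electrical; fire is 2x electrical.
m = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 0.5, 1.0]]
w = ahp_weights(m)
print([round(x, 3) for x in w])  # weights sum to 1, radiological largest
```

The exact eigenvector method gives nearly identical weights for consistent matrices; the geometric-mean form is shown here for brevity.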
Procedia PDF Downloads 424
5543 Advanced Structural Analysis of Energy Storage Materials
Authors: Disha Gupta
Abstract:
The aim of this research is to apply X-ray and e-beam characterization techniques to lithium-ion battery materials in order to improve battery performance. The key characterization techniques employed are synchrotron X-ray Absorption Spectroscopy (XAS) combined with X-ray diffraction (XRD), scanning electron microscopy (SEM) and transmission electron microscopy (TEM), giving a more holistic approach to understanding material properties. This research effort provides additional battery characterization knowledge that promotes the development of new cathode, anode, electrolyte and separator materials, leading to better and more efficient battery performance. Both ex-situ and in-situ synchrotron experiments were performed on LiFePO₄, one of the most common cathode materials, from different commercial sources, and their structural analyses were conducted using the Athena/Artemis software. The analysis was then extended to other cathode materials like LiMnₓFe₍₁₋ₓ₎PO₄ and to sulphate systems like Li₂Mn(SO₄)₂ and Li₂Co₀.₅Mn₀.₅(SO₄)₂. XAS data were collected at the Fe and P K-edges for LiFePO₄, and at the Fe, Mn and P K-edges for LiMnₓFe₍₁₋ₓ₎PO₄, to conduct an exhaustive study of the structure. For the sulphate system Li₂Mn(SO₄)₂, XAS data were collected at both the Mn and S K-edges. Finite Difference Method for Near Edge Structure (FDMNES) simulations were also conducted for various iron, manganese and phosphate model compounds and compared with the experimental XANES data to understand the pre-edge structural information of the absorbing atoms. The Fe K-edge XAS results showed a charge compensation occurring on the Fe atom for all the differently synthesized LiFePO₄ materials as well as the LiMnₓFe₍₁₋ₓ₎PO₄ systems, whereas the Mn K-edge results differed as the Mn concentration in the materials changed. For the sulphate-based system Li₂Mn(SO₄)₂, however, no change in the Mn K-edge was observed, even though electrochemical studies showed Mn redox reactions.
Keywords: Li-ion batteries, electrochemistry, X-ray absorption spectroscopy, XRD
Procedia PDF Downloads 150
5542 Identification of Groundwater Potential Zones Using Geographic Information System and Multi-Criteria Decision Analysis: A Case Study in Bagmati River Basin
Authors: Hritik Bhattarai, Vivek Dumre, Ananya Neupane, Poonam Koirala, Anjali Singh
Abstract:
The availability of clean and reliable groundwater is essential for sustaining human and environmental health. Groundwater is a crucial resource that contributes significantly to the total annual water supply. However, over-exploitation has depleted groundwater availability considerably and led to some land subsidence. Determining groundwater potential zones is therefore vital for protecting water quality and managing groundwater systems. In this study, groundwater potential zones were delineated with the assistance of Geographic Information System (GIS) techniques, following a standard methodology that integrates GIS with the Analytic Hierarchy Process (AHP). To delineate the prospective groundwater zones, data were compiled for parameters such as geology, slope, soil, temperature, rainfall, drainage density, and lineament density; identifying and mapping potential groundwater zones nonetheless remains challenging due to the complex and dynamic nature of aquifer systems. A weighted overlay was then performed in ArcGIS, with appropriate ranks assigned to each parameter class, and multi-criteria decision analysis (MCDA) was applied to weigh and prioritize the parameters based on their relative impact on groundwater potential. Three groundwater potential zones resulted: low, moderate, and high potential. Our analysis showed that the central and lower parts of the Bagmati River Basin have the highest potential, i.e., 7.20% of the total area, whereas the northern and eastern parts have lower potential. The identified potential zones can be used to guide future groundwater exploration and management strategies in the region.
Keywords: groundwater, geographic information system, analytic hierarchy process, multi-criteria decision analysis, Bagmati
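The weighted-overlay step described above can be sketched per cell: the groundwater potential index is the sum of AHP-derived parameter weights times the rank assigned to that cell's class for each parameter. The weights, ranks, and class thresholds below are illustrative assumptions, not values from the study.

```python
# Illustrative AHP weights for five of the parameters (must sum to 1).
weights = {"geology": 0.25, "slope": 0.15, "rainfall": 0.20,
           "drainage_density": 0.15, "lineament_density": 0.25}

def potential_index(cell_ranks):
    """Weighted sum of per-parameter class ranks (e.g. 1-5) for one raster cell."""
    return sum(weights[p] * r for p, r in cell_ranks.items())

def classify(index, low=2.0, high=3.5):
    """Bin the index into the study's three zones (thresholds hypothetical)."""
    if index < low:
        return "low potential"
    if index < high:
        return "moderate potential"
    return "high potential"

cell = {"geology": 4, "slope": 5, "rainfall": 3,
        "drainage_density": 2, "lineament_density": 4}
print(classify(potential_index(cell)))  # high potential
```

In ArcGIS this computation is performed over every raster cell at once by the Weighted Overlay tool; the sketch shows the arithmetic for a single cell.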
Procedia PDF Downloads 105
5541 Low-Complex, High-Fidelity Two-Grades Cyclo-Olefin Copolymer (COC) Based Thermal Bonding Technique for Sealing a Thermoplastic Microfluidic Biosensor
Authors: Jorge Prada, Christina Cordes, Carsten Harms, Walter Lang
Abstract:
The development of microfluidic-based biosensors over recent years has shown increasing use of thermoplastic polymers as the constitutive material. Their low-cost production, high replication fidelity, biocompatibility and optical-mechanical properties are sought after for implementing disposable yet functional lab-on-chip solutions. Among the thermoplastic materials in use, Cyclo-Olefin Copolymer (COC) stands out due to its optical transparency, which makes it a frequent choice of manufacturing material for fluorescence-based biosensors. Several processing techniques for completing a closed COC microfluidic biosensor have been discussed in the literature; these techniques differ in their implementation, however, and therefore add more or less complexity when used in a mass production process. This work introduces, and reports results on, a purely thermal bonding process between COC substrates produced by hot embossing and COC foils containing screen-printed circuits. The proposed procedure takes advantage of the transition temperature difference between two COC grades of foil to accomplish the sealing of the microfluidic channels. Patterned heat injection to the COC foil through the COC substrate is applied, resulting in consistent channel geometry uniformity. Measurements of bond strength and bursting pressure are shown, suggesting that this purely thermal bonding process yields a technique that can be easily adapted into the thermoplastic microfluidic chip production workflow, while enabling a low-cost as well as high-quality COC biosensor manufacturing process.
Keywords: biosensor, cyclo-olefin copolymer, hot embossing, thermal bonding, thermoplastics
Procedia PDF Downloads 240
5540 Empirical Measures to Enhance Germination Potential and Control Browning of Tissue Cultures of Andrographis paniculata
Authors: Nidhi Jindal, Ashok Chaudhury, Manisha Mangal
Abstract:
Andrographis paniculata (Burm. f.) Wallich ex Nees (family Acanthaceae), popularly known as the King of Bitters, is an important medicinal herb widely cultivated in southern Asia. It has an astonishingly wide range of medicinal properties, including anti-inflammatory, antidiarrhoeal, antiviral, antimalarial, hepatoprotective, cardiovascular, anticancer, and immunostimulatory activities. Though this herb is generally propagated through seeds, many germination problems have prompted scientists to work out alternative techniques for its mass production. The potential of tissue culture techniques as an alternative tool for A. paniculata multiplication was found to be promising. However, a high mortality rate of explants caused by phenolic browning is one of the reported difficulties, and low multiplication rates have been reported in the proliferation phase, along with culture decline characterized by leaf fall and loss of overall vigor. In view of these problems, a study was undertaken to overcome seed dormancy, improve the germination potential, and investigate possible means for the successful proliferation of cultures via preventive approaches to the failures caused by phenolic browning. Experiments were conducted to improve the germination potential; among all the chemical and mechanical trials, scarification of seeds with sandpaper proved to be the best method, enhancing the germination potential (82.44%) within 7 days. Similarly, several pretreatments and media combinations were tried to overcome browning of explants, leading to the conclusion that the addition of 0.1% citric acid and 0.2% ascorbic acid to the media, followed by rapid subculturing of explants, controlled browning and decline of explants by 67.45%.
Keywords: plant tissue culture, empirical measure, germination, tissue culture
Procedia PDF Downloads 414
5539 Information Literacy Skills of Legal Practitioners in Khyber Pakhtunkhwa-Pakistan: An Empirical Study
Authors: Saeed Ullah Jan, Shaukat Ullah
Abstract:
Purpose of the study: The main theme of this study is to explore the information literacy skills of law practitioners in Khyber Pakhtunkhwa, Pakistan, under the heading "Information Literacy Skills of Legal Practitioners in Khyber Pakhtunkhwa-Pakistan: An Empirical Study." Research method and procedure: To conduct this quantitative study, a simple random sampling approach was used. An adapted questionnaire was distributed among 254 lawyers of Dera Ismail Khan through personal visits and electronic means, and the data collected were analyzed with SPSS (Statistical Package for the Social Sciences) software. Delimitations of the study: The study is delimited to the southern district of Khyber Pakhtunkhwa: Dera Ismail Khan. Key findings: Most of the lawyers of District Dera Ismail Khan can recognize and understand the information they need, and a large number are capable of presenting information in both written and electronic forms. However, they are not comfortable with the various legal databases or with search and keyword techniques, and they have little knowledge of Boolean operators for locating online information. Conclusion and recommendations: Efforts should be made to arrange refresher courses and training workshops on the use of legal databases and search techniques for the retrieval of information sources. This practice will enhance the information literacy skills of lawyers, which will ultimately result in a better legal system in Pakistan. Practical implications: The findings of the study will motivate policymakers and the authorities of legal forums to restructure information literacy programs to fulfill lawyers' information needs. Contribution to knowledge: No significant work has previously been done on lawyers' information literacy skills in Khyber Pakhtunkhwa, Pakistan; this study brings a clear picture of the information literacy skills of law practitioners and addresses the problems they face during the information-seeking process.
Keywords: information literacy-Pakistan, information literacy-lawyers, information literacy-lawyers-KP, law practitioners-Pakistan
Procedia PDF Downloads 150
5538 Analysis of a Strengthening of a Building Reinforced Concrete Structure
Authors: Nassereddine Attari
Abstract:
Each strengthening or repair operation requires special consideration and the use of methods, tools and techniques appropriate to the situation and the specific problems of each structure. The aim of this paper is to study the seismic pathology of a reinforced concrete building, to assess its vulnerability using a non-linear pushover analysis, and to develop capacity curves for a medium-capacity building in order to estimate its damage state.
Keywords: pushover analysis, earthquake, damage, strengthening
Procedia PDF Downloads 430
5537 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks
Authors: Ashkan Ebadi, Adam Krzyzak
Abstract:
Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields, and the techniques behind them have improved over time. In general, such systems help users find the products or services they require (e.g. books, music) by analyzing and aggregating other users' activities and behavior, mainly in the form of reviews, and making the best recommendations; these recommendations can facilitate the user's decision-making process. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types which implicitly or explicitly help the system improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating links users to items. This paper proposes a highly accurate hotel recommender system, implemented in several layers. Using a multi-aspect rating system and benefiting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of users' reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely user clustering, a matrix factorization module, and a hybrid recommender system; each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
Keywords: tourism, hotel recommender system, hybrid, implicit features
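The blending of explicit multi-aspect ratings with implicit review sentiment can be sketched as below. This is an illustrative sketch, not the paper's implementation: the aspect names, word lists, and blending weight `alpha` are assumptions, and the crude lexicon stands in for the NLP/topic-modelling step.

```python
# Hypothetical sentiment lexicon standing in for a real NLP pipeline.
POSITIVE = {"clean", "friendly", "great", "comfortable", "quiet"}
NEGATIVE = {"dirty", "rude", "noisy", "broken", "awful"}

def sentiment(review):
    """Crude lexicon sentiment in [-1, 1] derived from the review text."""
    words = review.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def hybrid_score(aspect_ratings, review, alpha=0.7):
    """Blend the mean explicit rating (1-5, rescaled to [0,1]) with sentiment."""
    explicit = (sum(aspect_ratings.values()) / len(aspect_ratings) - 1) / 4
    implicit = (sentiment(review) + 1) / 2     # rescale [-1,1] -> [0,1]
    return alpha * explicit + (1 - alpha) * implicit

ratings = {"location": 5, "cleanliness": 4, "service": 4}
score = hybrid_score(ratings, "great clean room but noisy street")
print(round(score, 3))
```

A full engine would compute such scores per candidate hotel within the user's cluster and rank them, with the matrix factorization module filling in unrated aspects.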
Procedia PDF Downloads 272
5536 Comparati̇ve Study of Pi̇xel and Object-Based Image Classificati̇on Techni̇ques for Extracti̇on of Land Use/Land Cover Informati̇on
Authors: Mahesh Kumar Jat, Manisha Choudhary
Abstract:
Rapid population and economic growth have resulted in large-scale land use/land cover (LULC) changes, and changes in the biophysical properties of the Earth's surface and their impact on climate are of primary concern nowadays. Different approaches, ranging from location-based relationships to modelling earth surface-atmosphere interaction through techniques like surface energy balance (SEB), have been used in the recent past to examine the relationship between changes in Earth surface land cover and climatic characteristics like temperature and precipitation. A remote sensing-based model, the Surface Energy Balance Algorithm for Land (SEBAL), has been used to estimate the surface heat fluxes over the Mahi Bajaj Sagar catchment (India) from 2001 to 2020, with Landsat ETM and OLI satellite data used to model the SEB of the area. Changes in observed precipitation and temperature, obtained from the India Meteorological Department (IMD), have been correlated with changes in surface heat fluxes to understand the relative contributions of LULC change to changes in these climatic variables. Results indicate a noticeable impact of LULC changes on climatic variables, aligned with the respective changes in SEB components. Precipitation increases at a rate of 20 mm/year; the maximum and minimum temperatures decrease and increase at 0.007 °C/year and 0.02 °C/year, respectively, while the average temperature increases at 0.009 °C/year. Changes in latent heat flux and sensible heat flux correlate positively with precipitation and temperature, respectively. Variation in surface heat fluxes thus influences the climate parameters and helps explain the observed climatic change, so SEB modelling is useful for understanding LULC change and its impact on climate.
Keywords: remote sensing, GIS, object-based, classification
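The correlation step described above can be sketched with Pearson's r between a heat-flux series and a climate-variable series. The numbers below are invented to demonstrate the computation; they are not data from the Mahi Bajaj Sagar catchment.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

latent_heat = [110, 120, 135, 150, 160]   # W/m^2 per epoch, illustrative
precip = [520, 560, 600, 650, 700]        # mm/year per epoch, illustrative
print(round(pearson_r(latent_heat, precip), 3))  # close to 1: strong positive
```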
Procedia PDF Downloads 131
5535 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians understand pathophysiological mechanisms, make diagnoses and prognoses, and choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as Support Vector Machines (SVM), the K-means algorithm and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach.
In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach has been proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
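As a minimal sketch of the kind of clustering step analyzed above (a toy K-means on made-up two-gene expression profiles, not the proposed hybrid approach):

```python
def kmeans(points, k, iters=20):
    """Tiny K-means for points given as equal-length tuples; returns (centroids, labels)."""
    centroids = [points[i] for i in range(k)]  # simplest init: the first k points
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest centroid (squared Euclidean distance)
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
        # update step: each centroid moves to the mean of its cluster
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return centroids, labels

# two well-separated groups of hypothetical two-gene expression profiles
profiles = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centers, labels = kmeans(profiles, 2)
print(labels)  # the first three profiles share one label, the last three the other
```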
Procedia PDF Downloads 323
5534 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. Colloid study and characterization are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to get enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but there is as yet neither a reference method nor an estimate of the impact of these different techniques on colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly size distribution and elementary composition. Ultrafiltration methods (frontal, tangential and centrifugal) have been considered, since they are widely used for the extraction of colloids in natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Total Organic Carbon (TOC) analysis) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on colloidal phase preservation. It appears that some of these methods preserve the colloidal phase composition more efficiently, while others are easier/faster to use.
The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.
Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution
Procedia PDF Downloads 215
5533 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances
Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim
Abstract:
This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, to support preventative protection against system failures and the estimation of complex system problems. Signal filtering techniques are applied to remove noise from field waveforms and to extract features. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of eight selected disturbances are modeled with randomized parameters within IEEE 1159 PQ ranges. The ranges, parameters, and weights are updated with respect to the field waveforms obtained. Currents undergo the same process to obtain waveform features as voltages, apart from some ratings and filters. Changing loads cause distortion in the voltage waveform by drawing different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms exhibit different types of variation and disturbance patterns, and a modified technique based on symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.
Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering
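For reference, the classical frequency-domain symmetrical-component decomposition that the proposed time-domain technique builds on can be sketched as follows (a generic Fortescue transform, not the authors' modified method):

```python
import cmath

A_OP = cmath.exp(2j * cmath.pi / 3)  # the rotation operator a = 1 at 120 degrees

def symmetrical_components(va, vb, vc):
    """Return (zero, positive, negative) sequence phasors for phase phasors va, vb, vc."""
    v0 = (va + vb + vc) / 3
    v1 = (va + A_OP * vb + A_OP**2 * vc) / 3
    v2 = (va + A_OP**2 * vb + A_OP * vc) / 3
    return v0, v1, v2

# a balanced three-phase set should contain only a positive-sequence component
va = cmath.rect(1.0, 0.0)
vb = cmath.rect(1.0, -2 * cmath.pi / 3)
vc = cmath.rect(1.0, 2 * cmath.pi / 3)
v0, v1, v2 = symmetrical_components(va, vb, vc)
print(round(abs(v0), 6), round(abs(v1), 6), round(abs(v2), 6))  # 0.0 1.0 0.0
```

An unbalanced set (for example during a single-phase sag) would instead show nonzero negative- and zero-sequence magnitudes, which is what makes the decomposition useful for disturbance detection.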
Procedia PDF Downloads 186
5532 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration
Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans
Abstract:
The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. 
Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants, and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub-40-micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle-laden returned drilling fluid be used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representativity introduced by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data on the biasing of geochemical signatures due to particle size distributions are presented and show that, depending on the solids control and dewatering techniques used, dewatering can have an unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the advancement of coiled tubing based greenfields mineral exploration.
Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control
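The power-law relationship reported for the cuttings' particle size distributions can be characterized by a log-log least-squares fit; the sketch below uses made-up sieve data, not the paper's measurements:

```python
import math

def powerlaw_fit(x, y):
    """Fit y = c * x**b by linear least squares in log-log space; returns (c, b)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
    c = math.exp(my - b * mx)
    return c, b

# hypothetical sieve data following an exact power law y = 2 * x**1.5
sizes = [10.0, 20.0, 40.0, 80.0, 160.0]   # particle size (microns)
mass = [2.0 * s ** 1.5 for s in sizes]    # cumulative mass retained (arbitrary units)
c, b = powerlaw_fit(sizes, mass)
print(round(c, 4), round(b, 4))  # 2.0 1.5
```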
Procedia PDF Downloads 248
5531 Compost Bioremediation of Oil Refinery Sludge by Using Different Manures in a Laboratory Condition
Authors: O. Ubani, H. I. Atagana, M. S. Thantsha
Abstract:
This study was conducted to measure the reduction in polycyclic aromatic hydrocarbon (PAH) content in oil sludge by co-composting the sludge with pig, cow, horse and poultry manures under laboratory conditions. Four kilograms of soil spiked with 800 g of oil sludge was co-composted separately with each manure in a ratio of 2:1 (w/w) spiked soil:manure, and with wood-chips in a ratio of 2:1 (w/v) spiked soil:wood-chips. A control was set up in the same way but without manure. Mixtures were incubated for 10 months at room temperature. Compost piles were turned weekly and the moisture level was maintained between 50% and 70%. Moisture level, pH, temperature, CO2 evolution and oxygen consumption were measured monthly, and the ash content at the end of experimentation. Bacteria capable of utilizing PAHs were isolated, purified and characterized by molecular techniques using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE); the 16S rDNA gene was amplified using specific primers (16S-P1 PCR and 16S-P2 PCR), and the amplicons were sequenced. The extent of PAH reduction was measured using an automated Soxhlet extractor with dichloromethane as the extraction solvent, coupled with gas chromatography/mass spectrometry (GC/MS). Temperature did not exceed 27.5 °C in any compost heap, pH ranged from 5.5 to 7.8, and CO2 evolution was highest in poultry manure at 18.78 µg/dwt/day. Microbial growth and activities were enhanced. The bacteria identified were Bacillus, Arthrobacter and Staphylococcus species. PAH measurements showed reductions of between 77 and 99%. The results from the control experiment may be attributable to its invasion by fungi. Co-composting of spiked soils with animal manures enhanced the reduction in PAHs.
Interestingly, all bacteria isolated and identified in this study were present in all treatments, including the control.
Keywords: bioremediation, co-composting, oil refinery sludge, PAHs, bacteria spp, animal manures, molecular techniques
Procedia PDF Downloads 475
5530 Experimental Evaluation of Electrocoagulation for Hardness Removal of Bore Well Water
Authors: Pooja Kumbhare
Abstract:
Water is an important resource for the survival of life. The inadequate availability of surface water makes people depend on ground water to fulfill their needs. However, ground water is generally too hard to satisfy the requirements of domestic as well as industrial applications. Removal of hardness involves various techniques such as the lime soda process, ion exchange, reverse osmosis, nano-filtration, distillation, evaporation, etc. Each of these techniques has its own problems, such as high annual operating cost, sediment formation on membranes, or sludge disposal. Electrocoagulation (EC) is being explored as a modern and cost-effective technology to cope with the growing demand for high water quality at the consumer end. In general, earlier studies on electrocoagulation for hardness removal deployed batch processes. As batch processes are inappropriate for dealing with large volumes of water, it is essential to develop a continuous flow EC process. So, in the present study, an attempt is made to investigate a continuous flow EC process for decreasing the excessive hardness of bore-well water. The experimental study was conducted using 12 aluminum electrodes (25 cm × 10 cm, 1 cm thick) provided in an EC reactor with a volume of 8 L. A bore well water sample, collected from a local bore-well (at Vishrambag, Sangli, Maharashtra) with an average initial hardness of 680 mg/l (range: 650 – 700 mg/l), was used for the study. Continuous flow electrocoagulation experiments were carried out by varying the operating parameters, specifically reaction time (range: 10 – 60 min), voltage (range: 5 – 20 V) and current (range: 1 – 5 A). Based on the experimental study, it is found that hardness removal to the desired extent could be achieved even for a continuous flow EC reactor, so its use is found promising.
Keywords: hardness, continuous flow EC process, aluminum electrode, optimal operating parameters
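One quantity commonly estimated for an aluminum-electrode EC reactor, though not reported in the abstract, is the theoretical coagulant dose from Faraday's law; the sketch below is a generic illustration with hypothetical operating values:

```python
FARADAY = 96485.0  # C/mol
M_AL = 26.98       # g/mol, molar mass of aluminum
Z_AL = 3           # electrons transferred per Al3+ ion

def aluminum_dose_g(current_a, time_s):
    """Theoretical mass of aluminum dissolved (g), from Faraday's law: m = I*t*M/(z*F)."""
    return current_a * time_s * M_AL / (Z_AL * FARADAY)

# e.g. a hypothetical run at 3 A for 30 minutes
print(round(aluminum_dose_g(3.0, 30 * 60), 3))  # ~0.503 g of Al released
```

The actual dose in practice is usually somewhat higher or lower than this theoretical value, depending on current efficiency.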
Procedia PDF Downloads 178
5529 ESP: Peculiarities of Teaching Psychology in English to Russian Students
Authors: Ekaterina A. Redkina
Abstract:
The necessity and importance of teaching professionally oriented content in English need no proof nowadays. Consequently, the ability to share personal ESP teaching experience seems of great importance. This paper is based on eight years of ESP and EFL teaching experience at Moscow State Linguistic University, Moscow, Russia, and presents a theoretical analysis of the specifics, possible problems, and perspectives of teaching psychology in English to Russian psychology students. The paper concerns issues that are common to many ESP classrooms and familiar to many teachers. Among them are designing an ESP curriculum (for psychologists in this case), finding the balance between content and language in the classroom, the main teaching principles (the 4 C’s), and the choice of assessment techniques and teaching materials. The main objective of teaching psychology in English to Russian psychology students is developing the knowledge and skills essential for professional psychologists. Belonging to the international professional community presupposes high-level content-specific knowledge and skills, a high level of linguistic skill and cross-cultural linguistic ability, and finally a high level of professional etiquette. Thus, teaching psychology in English pursues three main outcomes: content, language and professional skills. The paper explains each of these outcomes, with examples. Particular attention is paid to lesson structure, its objectives, and the difference between a typical EFL and ESP lesson. An attempt is also made to find commonalities between teaching ESP and CLIL. One approach holds that CLIL is more common in schools, while ESP is more common in higher education. The paper argues that CLIL methodology can be successfully used in ESP teaching and that many CLIL activities are also well adapted for professional purposes.
The research paper provides insights into the process of teaching psychologists in Russia, real teaching experience and teaching techniques that have proved efficient over time.
Keywords: ESP, CLIL, content, language, psychology in English, Russian students
Procedia PDF Downloads 609
5528 Assessment of the Electrical, Mechanical, and Thermal Nociceptive Thresholds for Stimulation and Pain Measurements at the Bovine Hind Limb
Authors: Samaneh Yavari, Christiane Pferrer, Elisabeth Engelke, Alexander Starke, Juergen Rehage
Abstract:
Background: Thermal, electrical, and mechanical nociceptive thresholds are commonly used to evaluate local anesthesia in many species, for instance, cows, horses, cats, dogs, and rabbits. Given the lack of investigations evaluating and/or validating these nociceptive thresholds, our plan was to compare two foot local anesthesia methods: Intravenous Regional Anesthesia (IVRA) and our modified four-point Nerve Block Anesthesia (NBA). Materials and Methods: Eight healthy, nonpregnant, nondairy Holstein Frisian cows were selected for this cross-over study. All cows were divided into two groups to receive the two local anesthesia techniques of IVRA and our modified four-point NBA. Thermal and electrical stimuli, mechanical force, and pinpricks were applied to evaluate the quality of the local anesthesia methods before and after application. Results: The statistical evaluation demonstrated that our four-point NBA qualifies as a standard foot local anesthesia. However, the recorded results revealed no significant difference between the two local anesthesia techniques of IVRA and modified four-point NBA in the quality and duration of anesthesia stimulated by electrical, mechanical and thermal nociceptive stimuli. Conclusion and discussion: All three nociceptive threshold stimuli (electrical, mechanical and heat) can be applied to measure and evaluate the efficacy of foot local anesthesia in dairy cows. However, our study revealed no superiority of any of the three nociceptive methods for evaluating the duration and quality of bovine foot local anesthesia. Veterinarians can use any of the heat, mechanical, or electrical methods to investigate the duration and quality of their selected anesthesia method.
Keywords: mechanical, thermal, electrical threshold, IVRA, NBA, hind limb, dairy cow
Procedia PDF Downloads 245
5527 Investigation of the Morphology of SiO2 Nano-Particles Using Different Synthesis Techniques
Authors: E. Gandomkar, S. Sabbaghi
Abstract:
In this paper, the effects of different synthesis methods on the morphology and size of silica nanostructures have been investigated via modified sol-gel and precipitation methods. The resulting products have been characterized by particle size analysis, scanning electron microscopy (SEM), X-ray diffraction (XRD) and Fourier transform infrared (FT-IR) spectroscopy. As a result, the SiO2 obtained with the sol-gel and precipitation methods was spherical, whereas the modified sol-gel method yielded a nanolayer structure.
Keywords: modified sol-gel, precipitation, nanolayer, Na2SiO3, nanoparticle
Procedia PDF Downloads 292
5526 White Wine Discrimination Based on Deconvoluted Surface Enhanced Raman Spectroscopy Signals
Authors: Dana Alina Magdas, Nicoleta Simona Vedeanu, Ioana Feher, Rares Stiufiuc
Abstract:
Food and beverage authentication using rapid and inexpensive analytical tools represents an important challenge nowadays. In this regard, the potential of vibrational techniques in food authentication has gained increased attention during the last years. For wine discrimination, Raman spectroscopy appears more feasible than IR (infrared) spectroscopy, because of the relatively weak water bending mode in the vibrational spectroscopy fingerprint range. Despite this, the use of the Raman technique in wine discrimination is at an early stage. Taking this into consideration, the wine discrimination potential of the surface-enhanced Raman scattering (SERS) technique is reported in the present work. The novelty of this study, compared with previously reported applications of vibrational techniques to wine discrimination, is that the wines are differentiated based on the individual signals obtained from deconvoluted spectra. In order to classify wines with respect to variety, geographical origin and vintage, the peak intensities obtained after spectral deconvolution were compared using supervised chemometric methods like Linear Discriminant Analysis (LDA). For this purpose, a set of 20 white Romanian wines of four varieties, from different Romanian viticultural regions, was considered. Chemometric methods applied directly to the raw SERS experimental spectra proved their efficiency, but identification of discrimination markers was found to be very difficult due to overlapping signals as well as band shifts. By using the deconvolution approach, a better general view of the compositional differences among the wines could be reached.
Keywords: chemometry, SERS, variety, wines discrimination
Procedia PDF Downloads 160
5525 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D), or for further examination, exploration, research and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies, as well as time-consuming and labor-intensive, and they offer less precision with limited data. In comparison, advanced techniques require less manpower and provide more precise output with a wide variety of data sets. In this experimentation, the aerial photogrammetry technique is used, where a UAV flies over an area, captures geocoded images and produces a three-dimensional model (3-D model). The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as Ground Control Points (GCPs) using the Differential Global Positioning System (DGPS) in PPK or RTK mode. Furthermore, the raw data collected by the UAV and DGPS are processed in digital image processing programs and computer-aided design software. As outputs, we obtain a dense point cloud, a Digital Elevation Model (DEM) and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a Digital Terrain Model (DTM) for contour generation or a digital surface. As a result, we get a digital map of the area to be surveyed. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
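The ground sampling distance named among the flight parameters follows the usual pinhole-camera relation; the sketch below uses hypothetical camera values, not the parameters of this survey:

```python
def ground_sampling_distance_cm(altitude_m, pixel_size_um, focal_length_mm):
    """GSD (cm/pixel) = flight altitude * sensor pixel pitch / focal length."""
    altitude_cm = altitude_m * 100.0
    pixel_size_cm = pixel_size_um * 1e-4
    focal_length_cm = focal_length_mm / 10.0
    return altitude_cm * pixel_size_cm / focal_length_cm

# hypothetical small-format camera: 2.4 um pixel pitch, 8.8 mm focal length, 100 m altitude
print(round(ground_sampling_distance_cm(100.0, 2.4, 8.8), 2))  # 2.73 (cm per pixel)
```

Flying lower or using a longer lens shrinks the GSD proportionally, which is why flight altitude is the main lever for map resolution.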
Procedia PDF Downloads 32
5524 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques
Authors: Melese Wondatir
Abstract:
Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research, which provides essential information about flooding, including its scope and susceptible areas. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from road, NDVI, soil type, and land use type were all used throughout the study to determine vulnerability to flood damage. Ranking elements according to their significance in predicting flood damage risk was done using the Analytic Hierarchy Process (AHP) and geospatial approaches. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (<0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m², 83331.14 m², 476987.15 m², 24247.29 m², and 15.83 m² of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the north-western regions of the Nile River basin, especially those close to Sudanese cities like Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the ROC curve (AUC) demonstrates that the categorized vulnerability map achieves an accuracy rate of 91.0% based on 117 sample points.
By putting into practice strategies that account for the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. Furthermore, the research findings highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making
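The AHP weighting and consistency check described above can be sketched generically; the three-criterion pairwise matrix below is a made-up toy, not the study's ten-criterion comparison:

```python
import math

def ahp_weights_and_cr(matrix, random_index):
    """Approximate AHP priority weights (row geometric-mean method) and consistency ratio."""
    n = len(matrix)
    # priority weights: normalized geometric mean of each row
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    weights = [g / total for g in gmeans]
    # lambda_max estimated by comparing A*w with w
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)        # consistency index
    return weights, ci / random_index[n]   # consistency ratio CR = CI / RI

# Saaty's random index values for small matrices
RI = {3: 0.58, 4: 0.90, 5: 1.12}

# toy pairwise comparison of three hypothetical criteria:
# distance from the river, rainfall, elevation
A = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
weights, cr = ahp_weights_and_cr(A, RI)
print([round(w, 3) for w in weights], round(cr, 3))  # CR well below the 0.1 threshold
```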
Procedia PDF Downloads 71
5523 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain
Authors: Engy S. El-Kayal, Mohamed M. S. Arafa
Abstract:
There are various causes of ankle pain (AP), including traumatic and non-traumatic causes. Various imaging techniques are available for the assessment of AP. MRI is considered the imaging modality of choice for ankle joint evaluation, with the advantages of high spatial resolution and multiplanar capability, and hence the ability to visualize the small, complex anatomical structures around the ankle. However, the high cost and relatively limited availability of MRI systems, as well as the relatively long duration of the examination, are all considered disadvantages of MRI. There is therefore a need for a more rapid and less expensive examination modality with good diagnostic accuracy to fill this gap. HRU has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low cost and readily available. US can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with MRI in the assessment of patients with AP. We included forty patients complaining of AP. All patients underwent real-time HRU and MRI of the affected ankle. The results of both techniques were compared to surgical and arthroscopic findings. All patients were examined according to a defined protocol that includes imaging of tendon tears or tendinitis, muscle tears, masses or fluid collections, ligament sprains or tears, inflammation or fluid effusion within the joint or bursa, bone and cartilage lesions, erosions and osteophytes. Analysis of the results showed that the mean age of patients was 38 years. The study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting causes of AP was 85%, while the accuracy of MRI was 87.5%.
In conclusion, HRU and MRI are two complementary investigative tools: the former can be used as a primary tool, while the latter confirms the diagnosis and the extent of the lesion, especially when surgical intervention is planned.
Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)
Procedia PDF Downloads 190
5522 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational methods. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proven to be an NP-hard problem, owing to the complexity of the process, as illustrated by the Levinthal paradox. An alternative is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN) and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have been used as a standard method for this task. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013).
The developed ANN method uses the same training and testing process that Huang used to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that a comparative analysis can be made by directly comparing the statistical results of each method.
Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
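The Q3 score cited in the abstract is simply the fraction of residues whose predicted state (helix H, strand E or coil C) matches the observed one; a minimal sketch:

```python
def q3_accuracy(predicted, observed):
    """Q3: percentage of residues whose predicted state (H, E or C) matches the observed one."""
    assert len(predicted) == len(observed)
    correct = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * correct / len(observed)

# toy 12-residue example: 10 of 12 states match
observed = "HHHHCCEEEECC"
predicted = "HHHCCCEEEHCC"
print(round(q3_accuracy(predicted, observed), 2))  # 83.33
```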
Procedia PDF Downloads 621