Search results for: end-user trained information extraction
12784 The Effects of Extraction Methods on Fat Content and Fatty Acid Profiles of Marine Fish Species
Authors: Yesim Özogul, Fethiye Takadaş, Mustafa Durmus, Yılmaz Ucar, Ali Rıza Köşker, Gulsun Özyurt, Fatih Özogul
Abstract:
It has been well documented that polyunsaturated fatty acids (PUFAs), especially eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), have beneficial effects on health, including prevention of cardiovascular diseases, cancer and autoimmune disorders, development of the brain and retina, and treatment of major depressive disorder. An adequate intake of omega-3 PUFAs is therefore essential, and marine fish are generally the richest sources of PUFAs in the human diet. Thus, this study was conducted to evaluate the efficiency of different extraction methods (Bligh and Dyer, Soxhlet, microwave and ultrasonic) on the fat content and fatty acid profiles of marine fish species (Mullus barbatus, Upeneus moluccensis, Mullus surmuletus, Anguilla anguilla, Pagellus erythrinus and Saurida undosquamis). Fish were caught by trawl in the Mediterranean Sea and immediately iced. They were then transported to the laboratory on ice and stored at -18 °C in a freezer until the day of analysis. After lipid was extracted from the fish by the different methods, the lipid samples were converted to their constituent fatty acid methyl esters. The fatty acid composition was analysed on a GC Clarus 500 with an autosampler (Perkin Elmer, Shelton, CT, USA) equipped with a flame ionization detector and a fused silica capillary SGE column (30 m x 0.32 mm ID x 0.25 mm BP20 0.25 UM, USA). The results showed that there were significant differences (P < 0.05) in the fatty acids of all species, and the extraction methods also affected the fat contents and fatty acid profiles of the fish species.
Keywords: extraction methods, fatty acids, marine fish, PUFA
Procedia PDF Downloads 267
12783 Detecting Characters as Objects Towards Character Recognition on Licence Plates
Authors: Alden Boby, Dane Brown, James Connan
Abstract:
Character recognition is a well-researched topic across disciplines. Regardless, creating a solution that can cater to multiple situations is still challenging. Vehicle licence plates lack an international standard, meaning that different countries and regions have their own licence plate formats. A problem that arises from this is that the typefaces and designs from different regions make it difficult to create a solution that can cater to a wide range of licence plates. The main issue concerning detection is the character recognition stage. This paper aims to create an object detection-based character recognition model trained on a custom dataset that consists of typefaces of licence plates from various regions. Given that characters have features that are consistently maintained across an array of fonts, YOLO can be trained to recognise characters based on these features, which may provide better performance than OCR methods such as Tesseract OCR.
Keywords: computer vision, character recognition, licence plate recognition, object detection
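The abstract does not name a specific YOLO version or training framework, so the following is only a minimal sketch of how such a character detector could be set up, assuming the Ultralytics YOLO package and a hypothetical dataset file chars.yaml that maps the alphanumeric characters to object classes.

```python
# Minimal sketch: training YOLO to detect characters as objects (assumed setup,
# not the authors' exact configuration).
from ultralytics import YOLO

# "chars.yaml" is a hypothetical dataset config listing licence plate images
# annotated with one bounding box per character, with 36 classes (0-9, A-Z).
model = YOLO("yolov8n.pt")                       # start from a small pretrained checkpoint
model.train(data="chars.yaml", epochs=100, imgsz=640)

# Inference: each detected box is a character; sorting boxes left-to-right
# recovers the plate string. "plate.jpg" is a placeholder image path.
results = model.predict("plate.jpg")[0]
boxes = sorted(results.boxes, key=lambda b: float(b.xyxy[0][0]))
plate = "".join(results.names[int(b.cls)] for b in boxes)
print(plate)
```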
Procedia PDF Downloads 121
12782 Instructional Information Resources
Authors: Parveen Kumar
Abstract:
This article discusses institute information resources. Information, in its most restricted technical sense, is a sequence of symbols that can be interpreted as a message; information can be recorded as signs or transmitted as signals. Information is any kind of event that affects the state of a dynamic system. Conceptually, information is the message being conveyed. This concept has numerous other meanings in different contexts. Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, representation, and especially entropy.
Keywords: institutions, information institutions, information services for mission-oriented institutes, pattern
Procedia PDF Downloads 376
12781 Classifying Facial Expressions Based on a Motion Local Appearance Approach
Authors: Fabiola M. Villalobos-Castaldi, Nicolás C. Kemper, Esther Rojas-Krugger, Laura G. Ramírez-Sánchez
Abstract:
This paper presents classification results from exploring the combination of a motion-based approach with a local appearance method to describe the facial motion caused by the muscle contractions and expansions that occur in facial expressions. The proposed feature extraction method takes advantage of knowledge about which parts of the face reflect the highest deformations, so we selected 4 specific facial regions to which the appearance descriptor was applied. The most commonly used approaches for feature extraction are the holistic and the local strategies. In this work we present the results of using a local appearance approach, estimating the correlation coefficient between the 4 corresponding landmark-localized facial templates of the expression face and those of the neutral face. The results let us probe how the proposed motion estimation scheme, based on local appearance correlation computation, can simply and intuitively measure the motion parameters for some of the most relevant facial regions, and how these parameters can be used to recognize facial expressions automatically.
Keywords: facial expression recognition system, feature extraction, local-appearance method, motion-based approach
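As a rough illustration of the core measurement, the sketch below computes the Pearson correlation coefficient between a landmark-localized patch of an expression image and the corresponding patch of the neutral face; the four region coordinates are hypothetical placeholders, not the regions used by the authors.

```python
# Minimal sketch (assumed details): correlate landmark-localized patches of an
# expression face against the same patches of the neutral face.
import numpy as np

def patch_correlation(expr_img, neutral_img, box):
    """Pearson correlation between the same rectangular region of two images."""
    y0, y1, x0, x1 = box
    a = expr_img[y0:y1, x0:x1].astype(float).ravel()
    b = neutral_img[y0:y1, x0:x1].astype(float).ravel()
    return np.corrcoef(a, b)[0, 1]

# Hypothetical regions (eyebrows, eyes, nose/cheeks, mouth) in pixel coordinates.
regions = [(20, 60, 30, 170), (60, 100, 30, 170), (100, 140, 60, 140), (140, 190, 50, 150)]

expr = np.random.rand(200, 200)      # stand-ins for aligned grayscale faces
neutral = np.random.rand(200, 200)

# Low correlation in a region indicates strong motion/deformation there; the
# resulting 4-dimensional vector can feed a conventional classifier.
features = [patch_correlation(expr, neutral, box) for box in regions]
print(features)
```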
Procedia PDF Downloads 413
12780 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: ’Reddit’
Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell
Abstract:
Native language identification is one of the growing subfields in natural language processing (NLP). The task of native language identification (NLI) is mainly concerned with predicting the native language of an author from their writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (using social media data from Reddit). In this NLI task, the predefined models are trained on one corpus (TOEFL), and then the trained models are evaluated on different data using an external corpus (Reddit). Three classifiers are used in this task: the baseline, linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within the corpus and across corpora.
Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML
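A minimal sketch of such a cross-corpus setup with scikit-learn is given below; the feature choices (word n-grams as a content-based proxy, character n-grams as a content-independent proxy) and the toy data are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal cross-corpus NLI sketch (assumed feature definitions): train on one
# corpus (e.g., TOEFL essays), evaluate on another (e.g., Reddit posts).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny placeholder corpora; real runs would load TOEFL and Reddit data.
train_texts = ["ich habe going to the store", "je suis very happy today"]
train_labels = ["GER", "FRA"]
test_texts = ["this is, how do you say, tricky"]
test_labels = ["FRA"]

feature_sets = {
    "content-based (word n-grams)": TfidfVectorizer(ngram_range=(1, 2)),
    "content-independent (char n-grams)": TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
}
classifiers = {"linear SVM": LinearSVC(), "logistic regression": LogisticRegression(max_iter=1000)}

for fname, vec in feature_sets.items():
    for cname, clf in classifiers.items():
        model = make_pipeline(vec, clf).fit(train_texts, train_labels)
        acc = accuracy_score(test_labels, model.predict(test_texts))
        print(f"{fname} + {cname}: cross-corpus accuracy = {acc:.3f}")
```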
Procedia PDF Downloads 137
12779 Road Accidents Bigdata Mining and Visualization Using Support Vector Machines
Authors: Usha Lokala, Srinivas Nowduri, Prabhakar K. Sharma
Abstract:
Useful information has been extracted from road accident data in the United Kingdom (UK), using data analytics methods, with the aim of avoiding possible accidents in rural and urban areas. This analysis makes use of several methodologies such as data integration, support vector machines (SVM), correlation machines and multinomial goodness. The entire datasets have been imported from the traffic department of the UK with due permission. The information extracted from these huge datasets forms a basis for several predictions, which in turn avoid unnecessary memory lapses. Since the data are expected to grow continuously over a period of time, this work primarily proposes a new framework model which can be trained, adapt itself to new data and make accurate predictions. This work also throws some light on the use of SVM methodology for text classifiers on the obtained traffic data. Finally, it emphasizes the uniqueness and adaptability of the SVM methodology for this kind of research work.
Keywords: support vector machines (SVM), machine learning (ML), department of transportation (DFT)
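The sketch below shows one way an SVM classifier could be trained on tabular accident records with scikit-learn; the feature names and the severity target are illustrative assumptions, since the abstract does not describe the exact schema of the UK dataset.

```python
# Minimal sketch (assumed columns): predict accident severity from tabular
# accident attributes with an SVM.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical records; a real run would load the UK accident dataset instead.
df = pd.DataFrame({
    "speed_limit":  [30, 60, 30, 70, 40, 60],
    "num_vehicles": [2, 1, 3, 2, 1, 2],
    "hour_of_day":  [8, 23, 17, 2, 12, 18],
    "urban":        [1, 0, 1, 0, 1, 0],
    "severe":       [0, 1, 0, 1, 0, 1],     # target: severe accident or not
})
X, y = df.drop(columns="severe"), df["severe"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```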
Procedia PDF Downloads 274
12778 Research on Hangzhou Commercial Center System Based on Point of Interest Data
Authors: Chen Wang, Qiuxiao Chen
Abstract:
With the advent of the information age and the era of big data, urban planning research is no longer satisfied with the analysis and application of traditional data. Because of the limitations of traditional research on urban commercial center systems, big data provides new opportunities for urban research. Therefore, based on a quantitative evaluation method using big data, the commercial center system of the main city of Hangzhou is analyzed and evaluated, and the scale and hierarchical structure characteristics of the urban commercial center system are studied. In order to make up for the shortcomings of existing POI extraction methods, a POI extraction method based on adaptive adjustment of the search window is proposed, which can accurately and efficiently extract the POI data of commercial businesses in the main city of Hangzhou. Through visualization and kernel density analysis of the extracted Point of Interest (POI) data, the current situation of the commercial center system in the main city of Hangzhou is evaluated. It is then compared with the commercial center system structure of the 'Hangzhou City Master Plan (2001-2020)'; the problems existing in the planned urban commercial center system are analyzed, and corresponding suggestions and optimization strategies are provided for the planning of the Hangzhou commercial center system. The following conclusions are drawn: the commercial center system in the main city of Hangzhou currently presents one first-level main center, one second-level main center, three third-level sub-centers, and multiple community-level business centers. Generally speaking, the construction of the main center in the commercial center system is basically up to standard, while there is still a big gap in the construction of the sub-centers and the regional-level commercial centers, and further construction is needed. Therefore, an optimized hierarchical functional system is proposed to organize commercial centers in an orderly manner, strengthen central radiation to drive surrounding areas, implement construction guidance for the centers, effectively promote the development of group formation and further improve the commercial center system structure of the main city of Hangzhou.
Keywords: business center system, business format, main city of Hangzhou, POI extraction method
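For the kernel density step, one possible implementation over extracted POI coordinates is sketched below with scikit-learn; the bandwidth and the sample coordinates are assumptions for illustration.

```python
# Minimal sketch (assumed parameters): kernel density estimation over commercial
# POI coordinates to reveal center/sub-center structure.
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical projected POI coordinates (metres); a real run would use the
# commercial POIs extracted for the main city of Hangzhou.
rng = np.random.default_rng(0)
poi_xy = np.vstack([
    rng.normal([0, 0], 300, size=(200, 2)),       # a dense "main center" cluster
    rng.normal([5000, 2000], 600, size=(80, 2)),  # a weaker sub-center cluster
])

kde = KernelDensity(kernel="gaussian", bandwidth=500.0).fit(poi_xy)

# Evaluate density on a grid; peaks correspond to candidate commercial centers.
gx, gy = np.meshgrid(np.linspace(-2000, 7000, 50), np.linspace(-2000, 4000, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
density = np.exp(kde.score_samples(grid)).reshape(gx.shape)
print("highest-density grid cell:", np.unravel_index(density.argmax(), density.shape))
```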
Procedia PDF Downloads 140
12777 Extraction of Saponins and Cyclopeptides from Cow Cockle (Vaccaria hispanica (Mill.) Rauschert) Seeds Grown in Turkey
Authors: Ihsan Burak Cam, Ferhan Balci-Torun, Ayhan Topuz, Esin Ari, Ismail Gokhan Deniz, Ilker Genc
Abstract:
The seeds of Vaccaria hispanica have been used in the food and pharmaceutical industries. It is an important product due to its superior starch granules, triterpenic saponins, and cyclopeptides suitable for drug delivery. V. hispanica naturally grows in different climatic regions and has genotypes that differ in terms of seed content and composition. Sixty-six V. hispanica seed specimens were collected to represent the distribution in all regions of Turkey and to determine possible genotypic differences between regions. The seeds, collected from each of the 66 locations, were grown under greenhouse conditions at Akdeniz University, Antalya. Saponin and cyclopeptide contents of the V. hispanica seeds were determined after harvest. Accelerated solvent extraction (ASE) was applied for the extraction of saponins and cyclopeptides. Cyclopeptide (segetalin A) and saponin contents of V. hispanica seeds were found in the ranges of 0.165-0.654 g/100 g and 0.15-1.14 g/100 g, respectively. The results were found to be promising for the seeds from Turkey in terms of saponin content and quality. Acknowledgment: This study was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) (project no 112 O 136).
Keywords: Vaccaria hispanica, saponin, cyclopeptide, cow cockle seeds
Procedia PDF Downloads 295
12776 Network Word Discovery Framework Based on Sentence Semantic Vector Similarity
Authors: Ganfeng Yu, Yuefeng Ma, Shanliang Yang
Abstract:
Word discovery is a key problem in text information retrieval technology. Methods for new word discovery tend to be closely tied to words because they generally obtain new word results by analyzing words. With the popularity of social networks, individual netizens and online self-media have generated various network texts for the convenience of online life, including network words that are far from standard Chinese expression. How to detect network words is one of the important goals in the field of text information retrieval today. In this paper, we integrate a word embedding model and clustering methods to propose a network word discovery framework based on sentence semantic similarity (S³-NWD) to detect network words effectively from a corpus. This framework constructs sentence semantic vectors through a distributed representation model, uses the similarity of sentence semantic vectors to determine the semantic relationship between sentences, and finally realizes network word discovery through semantic replacement between sentences. Experiments verify that the framework not only achieves rapid discovery of network words but also recovers the standard word meaning of the discovered network words, which reflects the effectiveness of our work.
Keywords: text information retrieval, natural language processing, new word discovery, information extraction
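A minimal sketch of the sentence-similarity building block is shown below: sentence vectors are formed by averaging word embeddings and compared with cosine similarity. The toy embedding table is an assumption standing in for a trained distributed representation model.

```python
# Minimal sketch (assumed embeddings): average word vectors into sentence
# vectors and compare them with cosine similarity.
import numpy as np

# Toy embedding table standing in for a trained distributed representation model.
emb = {
    "the": np.array([0.1, 0.3, 0.0]), "movie": np.array([0.7, 0.2, 0.1]),
    "film": np.array([0.68, 0.22, 0.12]), "was": np.array([0.0, 0.1, 0.2]),
    "great": np.array([0.2, 0.9, 0.4]),
}

def sentence_vector(tokens):
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = sentence_vector("the movie was great".split())
s2 = sentence_vector("the film was great".split())   # candidate replacement sentence
print("sentence similarity:", cosine(s1, s2))
# A high similarity after substituting a candidate network word with a standard
# word suggests the two share the same meaning.
```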
Procedia PDF Downloads 95
12775 Synthesis, Characterization, and Application of Novel Trihexyltetradecyl Phosphonium Chloride for Extractive Desulfurization of Liquid Fuel
Authors: Swapnil A. Dharaskar, Kailas L. Wasewar, Mahesh N. Varma, Diwakar Z. Shende
Abstract:
Stringent environmental regulations in many countries for the production of ultra-low-sulfur petroleum fractions, intended to reduce sulfur emissions, have generated enormous interest in this area among the scientific community. The requirement of zero sulfur emissions increases the prominence of more advanced desulfurization techniques. Desulfurization by extraction is a promising approach with several advantages over conventional hydrodesulfurization. The present work deals with various new approaches for the desulfurization of ultra-clean gasoline, diesel and other liquid fuels by extraction with ionic liquids. In the present paper, experimental data on the extractive desulfurization of liquid fuel using trihexyltetradecyl phosphonium chloride are presented. FTIR, 1H-NMR, and 13C-NMR are discussed for the molecular confirmation of the synthesized ionic liquid. Further, conductivity, solubility, and viscosity analyses of the ionic liquid were carried out. The effects of reaction time, reaction temperature, sulfur compounds, ultrasonication, and recycling of the ionic liquid without regeneration on the removal of dibenzothiophene from liquid fuel were also investigated. In the extractive desulfurization process, the removal of dibenzothiophene in n-dodecane was 84.5% for a mass ratio of 1:1 in 30 min at 30 °C under mild reaction conditions. Phosphonium ionic liquids could be reused five times without a significant decrease in activity. The desulfurization of real fuels by multistage extraction was also examined. The data and results provided in the present paper explore the significant potential of phosphonium-based ionic liquids as novel extractants for the extractive desulfurization of liquid fuels.
Keywords: ionic liquid, PPIL, desulfurization, liquid fuel, extraction
Procedia PDF Downloads 609
12774 A Hybrid Digital Watermarking Scheme
Authors: Nazish Saleem Abbas, Muhammad Haris Jamil, Hamid Sharif
Abstract:
Digital watermarking is a technique that allows an individual to add and hide secret information, a copyright notice, or other verification message inside a digital audio, video, or image file. Today, with the advancement of technology, modern healthcare systems in many countries manage patients' diagnostic information digitally. When transmitted between hospitals through the internet, the medical data become vulnerable to attacks and require security and confidentiality. Digital watermarking techniques are used in order to ensure the authenticity, security and management of medical images and related information. This paper proposes a watermarking technique that embeds a watermark in medical images imperceptibly and securely. In this work, digital watermarking of medical images is carried out using the Least Significant Bit (LSB) with the Discrete Cosine Transform (DCT). The proposed methods for embedding and extracting a watermark in a watermarked image operate in the frequency domain, using the LSB with an XOR operation. The quality of the watermarked medical image is measured by the peak signal-to-noise ratio (PSNR). It was observed that the watermarked medical image obtained by performing the XOR operation between the DCT and LSB survived a compression attack with a PSNR of up to 38.98.
Keywords: watermarking, image processing, DCT, LSB, PSNR
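The quality metric is straightforward to reproduce; below is a small sketch of the PSNR computation together with a simplified LSB/XOR embedding step. This is only an illustration of the operations named in the abstract (shown in the spatial domain for brevity), not the authors' exact frequency-domain scheme.

```python
# Minimal sketch: PSNR between original and watermarked images, plus a
# simplified LSB embedding via XOR (spatial domain here; the paper applies
# LSB/XOR to DCT-transformed data).
import numpy as np

def psnr(original, watermarked, peak=255.0):
    mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def embed_lsb_xor(cover, bits, key_bits):
    """Replace the LSB of each pixel with (watermark bit XOR key bit)."""
    stego = cover.copy()
    flat = stego.ravel()
    payload = np.bitwise_xor(bits, key_bits).astype(np.uint8)
    flat[: payload.size] = (flat[: payload.size] & 0xFE) | payload
    return stego

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in medical image
wm_bits = rng.integers(0, 2, size=256, dtype=np.uint8)        # watermark bits
key_bits = rng.integers(0, 2, size=256, dtype=np.uint8)       # shared secret key

stego = embed_lsb_xor(cover, wm_bits, key_bits)
print("PSNR of watermarked image: %.2f dB" % psnr(cover, stego))
```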
Procedia PDF Downloads 47
12773 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network
Authors: Ahmad Alwosheel, Ahmed Alqaraawi
Abstract:
This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: a conventional unsupervised BIC-based approach is utilized in the first phase to detect speaker changes and train a neural network, while in the second phase, the trained parameters output by the neural network are used to predict the next incoming audio stream. Using this approach, an accuracy comparable to similar BIC-based approaches is achieved with a significant improvement in computation time.
Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation
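For the first phase, the standard delta-BIC criterion for a candidate change point between two single-Gaussian segment models can be sketched as follows; the penalty weight lambda and the toy feature matrix are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch (assumed lambda and features): delta-BIC for a candidate
# speaker change point; a positive value favours splitting the window there.
import numpy as np

def delta_bic(features, t, lam=1.0):
    """features: (N, d) frame features (e.g., MFCCs); t: candidate change index."""
    n, d = features.shape
    x, y = features[:t], features[t:]

    def logdet_cov(z):
        cov = np.cov(z, rowvar=False) + 1e-6 * np.eye(d)   # regularized covariance
        return np.linalg.slogdet(cov)[1]

    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return (0.5 * n * logdet_cov(features)
            - 0.5 * len(x) * logdet_cov(x)
            - 0.5 * len(y) * logdet_cov(y)
            - lam * penalty)

rng = np.random.default_rng(0)
seg_a = rng.normal(0.0, 1.0, size=(200, 13))    # frames from "speaker A"
seg_b = rng.normal(2.0, 1.0, size=(200, 13))    # frames from "speaker B"
frames = np.vstack([seg_a, seg_b])

candidates = range(50, 350, 10)
scores = [delta_bic(frames, t) for t in candidates]
print("best change point near frame", candidates[scores.index(max(scores))])  # expected near 200
```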
Procedia PDF Downloads 502
12772 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification
Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran
Abstract:
The brain is an important organ in our body since it is responsible for most functions such as vision and memory. However, different diseases such as Alzheimer's and tumors can affect the brain and lead to a partial or full disorder. Regular diagnosis is necessary as a preventive measure and can help doctors detect a possible problem early and therefore apply the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are used for the diagnosis of brain tumors. The most powerful and most used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by a doctor in order to locate an eventual tumor in the brain and prescribe the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze the tumor. In fact, a large number of Computer Aided Diagnostic (CAD) tools, including developed image processing algorithms, are proposed and exploited by doctors as a second opinion to analyze and identify brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification and feature extraction. Our proposed CAD includes three main parts. First, we load the brain MRI. Second, a robust technique for brain tumor extraction is proposed, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). The DWT is characterized by its multiresolution analytic property, which is why it was applied to MRI images with different decomposition levels for feature extraction. Nevertheless, this technique suffers from a main drawback since it necessitates huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is considered. In the last stage, according to the different extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all above-mentioned stages, is designed and developed using the MATLAB GUIDE user interface.
Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM
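The paper's tool is built in MATLAB, but the DWT to PCA to SVM pipeline can be sketched compactly in Python as below; the wavelet choice, decomposition level, number of components, and the synthetic data are all assumptions for illustration.

```python
# Minimal sketch (assumed parameters): wavelet features from MRI slices,
# reduced with PCA, classified with an SVM (benign vs malignant).
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def dwt_features(image, wavelet="haar", level=3):
    """Concatenate all subband coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    parts = [coeffs[0].ravel()]
    for (cH, cV, cD) in coeffs[1:]:
        parts += [cH.ravel(), cV.ravel(), cD.ravel()]
    return np.concatenate(parts)

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))                 # stand-ins for MRI slices
labels = rng.integers(0, 2, size=40)              # 0 = benign, 1 = malignant

X = np.array([dwt_features(img) for img in images])
model = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X, labels)
print("training accuracy:", model.score(X, labels))
```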
Procedia PDF Downloads 250
12771 Single and Sequential Extraction for Potassium Fractionation and Nano-Clay Flocculation Structure
Authors: Chakkrit Poonpakdee, Jing-Hua Tzen, Ya-Zhen Huang, Yao-Tung Lin
Abstract:
Potassium (K) is a known macronutrient and essential element for plant growth. Single leaching and modified sequential extraction schemes have been developed to estimate the relative phase associations of soil samples. The sequential extraction process is a step in analyzing the partitioning of metals affected by environmental conditions, but it is not a tool for estimating K bioavailability. The traditional single leaching method, on the other hand, has long been used to classify K speciation; it depends on K availability to the plants and is used for potash fertilizer recommendation rates. Clay minerals in soil are a factor controlling soil fertility. The change in the micro-structure of clay minerals under various environments (i.e., swelling or shrinking) is characterized using Transmission X-ray Microscopy (TXM). The objectives of this study are to 1) compare the distribution of K speciation between the single leaching and sequential extraction processes and 2) determine the clay particle flocculation structure before/after suspension with K+ using TXM. Four tropical soil samples were selected: farming without K fertilizer (10 years), long-term applied K fertilizer (10 years; 168-240 kg K2O ha-1 year-1), red soil (450-500 kg K2O ha-1 year-1) and forest soil. The results showed that the amounts of K speciation by the single leaching method were, in decreasing order, mineral K, HNO3 K, non-exchangeable K, NH4OAc K, exchangeable K and water-soluble K. The sequential extraction process indicated that most K speciations in soil were associated with the residual, organic matter, Fe or Mn oxide and exchangeable fractions, and the K fraction associated with carbonate was not detected in the tropical soil samples. The soils with long-term applied K fertilizer and the red soil had higher exchangeable K than the soil farmed long-term without K fertilizer and the forest soil. The results indicate that one way to increase available K (water-soluble K and exchangeable K) is to apply K fertilizer and organic fertilizer. The two-dimensional TXM images of clay particles suspended with K+ show that the aggregation structure of the clay minerals forms closed-void cellular networks. The porous cellular structure of soil aggregates in 1 M KCl solution had larger and much larger empty voids than in 0.025 M KCl and deionized water, respectively. TXM nanotomography is a new technique that can be useful in the field as a tool for better understanding clay mineral micro-structure.
Keywords: potassium, sequential extraction process, clay mineral, TXM
Procedia PDF Downloads 290
12770 Solvent Extraction, Spectrophotometric Determination of Antimony(III) from Real Samples and Synthetic Mixtures Using O-Methylphenyl Thiourea as a Sensitive Reagent
Authors: Shashikant R. Kuchekar, Shivaji D. Pulate, Vishwas B. Gaikwad
Abstract:
A simple and selective method is developed for the solvent extraction and spectrophotometric determination of antimony(III) using O-methylphenyl thiourea (OMPT) as a sensitive chromogenic chelating agent. The proposed method is based on the formation of an antimony(III)-OMPT complex, which was extracted with 0.0025 M OMPT in chloroform from an aqueous solution of antimony(III) in 1.0 M perchloric acid. The absorbance of this complex was measured at 297 nm against a reagent blank. Beer's law was obeyed up to 15 µg mL-1 of antimony(III). The molar absorptivity and Sandell's sensitivity of the antimony(III)-OMPT complex in chloroform are 16.6730 × 10³ L mol-1 cm-1 and 0.00730282 µg cm-2, respectively. The stoichiometry of the antimony(III)-OMPT complex, established from the slope ratio method, mole ratio method and Job's continuous variation method, was 1:2. The complex was stable for more than 48 h. The interfering effect of various foreign ions was studied, and suitable masking agents were used wherever necessary to enhance the selectivity of the method. The proposed method was successfully applied to the determination of antimony(III) from real alloy samples and synthetic mixtures. The repeatability of the method was checked by finding the relative standard deviation (RSD) for 10 determinations, which was 0.42%.
Keywords: solvent extraction, antimony, spectrophotometry, real sample analysis
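The reported figures follow directly from Beer's law; a small check is sketched below, using the molar mass of antimony (about 121.76 g mol-1) and the common convention that Sandell's sensitivity is the analyte mass per cm² giving an absorbance of 0.001. This is a textbook relation applied for illustration, not a calculation stated in the abstract.

```python
# Minimal sketch: relating absorbance, molar absorptivity and Sandell's
# sensitivity via Beer's law (A = eps * c * l).
M_SB = 121.76          # molar mass of antimony, g/mol
eps = 16.673e3         # molar absorptivity, L mol^-1 cm^-1 (reported value)
path = 1.0             # cuvette path length, cm

# Concentration that gives A = 0.001 in a 1 cm cell (Beer's law rearranged).
c = 0.001 / (eps * path)                 # mol/L
sandell = c * M_SB * 1e6 / 1000          # µg/mL, numerically µg/cm^2 for a 1 cm cell
print(f"Sandell's sensitivity ~ {sandell:.5f} ug cm^-2")   # ~0.0073, matching the abstract
```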
Procedia PDF Downloads 332
12769 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management
Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang
Abstract:
A construction superintendent needs to know not only the quantities of cost items or materials completed each day, in order to develop a daily report or calculate the daily progress (earned value), but also the quantities of materials (e.g., reinforced steel and concrete) to be ordered (or moved onto the jobsite) for performing the in-progress or ready-to-start construction activities (e.g., erection of reinforced steel and concrete pouring). These daily construction management tasks require great effort in extracting accurate quantities in a short time (usually right before getting off work every day). As a result, most superintendents can only provide these quantity data based on either what they see on the site (high inaccuracy) or the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of providing the quantities completed each day needs improvement in terms of accuracy and efficiency. Recently, the three-dimensional (3D) building information model (BIM) technique has been widely applied to support the construction quantity takeoff (QTO) process. The capability of virtual reality (VR) allows a building to be viewed from a first-person viewpoint. Thus, this study proposes an innovative system that integrates VR (using 'Unity') and BIM (using 'Revit') to extract quantities to support the above daily construction management tasks. The use of VR allows a system user to be present in a virtual building and more objectively assess the construction progress from the office. This VR- and BIM-based system is also facilitated by an integrated database (consisting of the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can work through a BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite. He then specifies a percentage of completion (e.g., 20%, 50% or 100%) for each identified building object based on his observation of the jobsite. Next, the system generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials.
Keywords: building information model, construction management, quantity takeoffs, virtual reality
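The daily-quantity step reduces to a simple multiplication per identified object, as the toy sketch below illustrates; the object names, materials, and quantities are hypothetical placeholders, not data from the case-study project.

```python
# Minimal sketch (hypothetical data): completed quantity per material =
# completion percentage * full BIM quantity of the identified object.
full_quantities = {               # from the BIM-based quantity takeoff
    "column_C12": {"concrete_m3": 4.2, "rebar_kg": 380.0},
    "slab_S03":   {"concrete_m3": 55.0, "rebar_kg": 4100.0},
}
observed_completion = {           # set by the superintendent in the VR walkthrough
    "column_C12": 1.00,           # finished today
    "slab_S03":   0.50,           # half poured
}

daily_report = {}
for obj, pct in observed_completion.items():
    for material, qty in full_quantities[obj].items():
        daily_report[material] = daily_report.get(material, 0.0) + pct * qty

print(daily_report)   # e.g. {'concrete_m3': 31.7, 'rebar_kg': 2430.0}
```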
Procedia PDF Downloads 132
12768 Comparison of Soil Test Extractants for Determination of Available Soil Phosphorus
Authors: Violina Angelova, Stefan Krustev
Abstract:
The aim of this work was to evaluate the effectiveness of different soil test extractants for the determination of available soil phosphorus in five internationally certified standard soils, sludge and clay (NCS DC 85104, NCS DC 85106, ISE 859, ISE 952, ISE 998). The certified samples were extracted with the following methods/extractants: CaCl₂, CaCl₂ and DTPA (CAT), double lactate (DL), ammonium lactate (AL), calcium acetate lactate (CAL), Olsen, Mehlich 3, Bray and Kurtz I, and Morgan, which are commonly used in soil testing laboratories. The phosphorus in the soil extracts was measured colorimetrically using a Spectroquant Pharo 100 spectrometer. The methods used in the study were evaluated according to the recovery of available phosphorus, ease of application and rapidity of performance. The relationships between methods were examined statistically. Good agreement of the results from the different soil tests was established for all certified samples. In general, the P values extracted by the nine extraction methods correlated significantly with each other. When grouping the soils according to pH, organic carbon content and clay content, the weaker extraction methods showed analogous trends; among the stronger extraction methods, common tendencies were also found. Other factors influencing the extraction force of the different methods include the soil:solution ratio, as well as the duration and intensity of shaking the samples. The mean extractable P in the certified samples was found to be in the order CaCl₂ < CAT < Morgan < Bray and Kurtz I < Olsen < CAL < DL < Mehlich 3 < AL. Although the nine methods extracted different amounts of P from the certified samples, the values of P extracted by the different methods were strongly correlated among themselves. Acknowledgment: The financial support by the Bulgarian National Science Fund Projects DFNI Н04/9 and DFNI Н06/21 is greatly appreciated.
Keywords: available soil phosphorus, certified samples, determination, soil test extractants
Procedia PDF Downloads 151
12767 Chitosan Magnetic Nanoparticles and Its Analytical Applications
Authors: Eman Alzahrani
Abstract:
Efficient extraction of proteins by removing interfering materials is necessary in proteomics, since most instruments cannot handle such contaminated sample matrices directly. In this study, chitosan-coated magnetic nanoparticles (CS-MNPs) for the purification of myoglobin were successfully fabricated. First, chitosan (CS) was prepared by a deacetylation reaction during its extraction from shrimp-shell waste. Second, magnetic nanoparticles (MNPs) were synthesised, using the coprecipitation method, from aqueous Fe2+ and Fe3+ salt solutions by the addition of a base under an inert atmosphere, followed by modification of the surface of the MNPs with chitosan. The morphology of the formed nanoparticles, which were about 23 nm in average diameter, was observed by transmission electron microscopy (TEM). In addition, the nanoparticles were characterised using X-ray diffraction (XRD) patterns, which showed that the naked magnetic nanoparticles have a spinel structure and that the surface modification did not result in a phase change of the Fe3O4. The coating of the MNPs was also demonstrated by scanning electron microscopy (SEM) analysis, energy dispersive X-ray spectroscopy (EDAX), and Fourier transform infrared (FT-IR) spectroscopy. The adsorption behaviour of the MNPs and CS-MNPs towards myoglobin was investigated. It was found that the adsorption capacity was larger for the CS-MNPs than for the MNPs. This result makes CS-MNPs good adsorbents and attractive for use in protein extraction from biological samples.
Keywords: chitosan, magnetic nanoparticles, coprecipitation, adsorption
Procedia PDF Downloads 416
12766 Bridging the Gaping Levels of Information Entree for Visually Impaired Students in the Sri Lankan University Libraries
Authors: Wilfred Jeyatheese Jeyaraj
Abstract:
Education is a key determinant of future success, and every person deserves non-discriminatory access to information for educational necessities in any case. Analysing and understanding complex information is a crucial learning tool, especially for students. In order to compete equally with sighted students, visually impaired students require unhindered access to all the available information resources. When the education of visually impaired students is brought into focus, it can be stated that they encounter several obstacles and barriers before they enter the university and during their time there as students. These obstacles and barriers are spread across technical, organizational and social arenas. This study reveals the possible approaches for absorbing and benefiting from the information provided by the Sri Lankan university libraries for visually impaired students. A purposive sampling technique was used to select sample visually impaired students attached to the Sri Lankan national universities. There are 07 national universities which accommodate visually impaired students; based on the identified data, they were selected for this study, and 80 visually impaired students were selected as the sample group. A descriptive survey method was used to collect data. Structured questionnaires, interviews and direct observation were used as research instruments. As far as the Sri Lankan context is concerned, visually impaired students are able to finish their courses through their own determination to overcome the barriers they encounter on their way to graduation, through moral and practical support from their friends, and very often through a high level of creativity. According to the findings, there are no specially trained university librarians to serve visually impaired users, and only a small number of assistive technology devices are available at present. This paper enables all university libraries in Sri Lanka to be informed about the social isolation of visually impaired students at the Sri Lankan universities and focuses on rectifying these issues by considering their distinct case for interaction.
Keywords: information access, Sri Lanka, university libraries, visual impairment
Procedia PDF Downloads 235
12765 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images
Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu
Abstract:
The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective way to track such changes. However, producing optimal water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models, including the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model, for extracting lake contours. Experiments indicate that the RSF model, in which a region-scalable fitting (RSF) energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and it can obtain desirable contour lines when there are 'holes' in the water regions, such as islands in a lake. Therefore, the RSF model is applied to extracting lake contours from Landsat satellite images. Four temporal Landsat satellite images, from the years 2000, 2005, 2010, and 2014, are used in our study. All of them were acquired in May, with the same path/row (121/036), covering Xuzhou City, Jiangsu Province, China. First, the near infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information update, and linear stretching is also applied in order to distinguish water from other land cover types. Then, for the first temporal image acquired in 2000, lake contours are extracted via the RSF model with initialization from user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as the initial values, lake contours are updated for the current temporal image by means of the RSF model. Meanwhile, the changed and unchanged lakes are also detected. The results show that great changes have taken place in two lakes, i.e. Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.
Keywords: level set model, multi-temporal image, lake contour extraction, contour update
Procedia PDF Downloads 366
12764 Nexus Between Library and Information Science Education Training and Practice in Nigeria: A Critical Assessment of the Synergy
Authors: Adebayo Emmanuel Layi
Abstract:
Library and Information Science (LIS) education is about six (6) decades old in Nigeria. The first library school was established in 1962 at the University of Ibadan, and since then, several institutions have been running the programme under various certifications, providing the manpower needs of professionals for libraries. As at June 2023, Nigeria has close to a thousand (1000) tertiary institutions, all needing the services of librarians. Apart from the tertiary institutions, several libraries exist in various establishments, including government, private and non-governmental organisations. This has underscored the enormous need for trained librarians for the libraries in these places. The nexus between LIS education training and practice is like the puzzle of the egg and the chick: which came first? Against this background, this paper examined the roles of the colonial masters in educational development in Africa, the influence of great library educators such as Melvil Dewey and other educators, and the journey through Nigerian institutions. Despite the sound footing of LIS education, noise, which seems to be a major obstacle to practice, as well as mending the broken link, were also examined in the paper. Strategies and the way forward for overall development are suggested.
Keywords: nexus, education, training, synergy
Procedia PDF Downloads 93
12763 A New Approach of Preprocessing with SVM Optimization Based on PSO for Bearing Fault Diagnosis
Authors: Tawfik Thelaidjia, Salah Chenikher
Abstract:
Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification based on the extracted features. In this paper, feature extraction from faulty bearing vibration signals is performed by combining the signal's kurtosis with features obtained by preprocessing the vibration signal samples using the db2 discrete wavelet transform at the fifth level of decomposition. In this way, a 7-dimensional vector of vibration signal features is obtained. After feature extraction from the vibration signal, the support vector machine (SVM) is applied to automate the fault diagnosis procedure. To improve the classification accuracy for bearing fault prediction, particle swarm optimization (PSO) is employed to simultaneously optimize the SVM kernel function parameter and the penalty parameter. The results have shown the feasibility and effectiveness of the proposed approach.
Keywords: condition monitoring, discrete wavelet transform, fault diagnosis, kurtosis, machine learning, particle swarm optimization, roller bearing, rotating machines, support vector machine, vibration measurement
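One plausible reading of the 7-dimensional feature vector is the signal kurtosis plus one statistic (here, energy) from each of the six subbands produced by a 5-level db2 decomposition; the sketch below follows that assumption, which is not spelled out in the abstract.

```python
# Minimal sketch (assumed feature definition): kurtosis + energy of each db2
# wavelet subband (5 levels -> cA5, cD5..cD1) = 7 features per signal.
import numpy as np
import pywt
from scipy.stats import kurtosis

def bearing_features(signal):
    coeffs = pywt.wavedec(signal, "db2", level=5)     # [cA5, cD5, cD4, cD3, cD2, cD1]
    energies = [float(np.sum(c ** 2)) for c in coeffs]
    return np.array([kurtosis(signal)] + energies)    # 7-dimensional vector

# Toy vibration signal: a sinusoid with impulsive "fault" spikes.
t = np.linspace(0, 1, 4096)
signal = np.sin(2 * np.pi * 50 * t)
signal[::512] += 5.0

print(bearing_features(signal).shape)   # (7,) -> input to the PSO-tuned SVM
```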
Procedia PDF Downloads 437
12762 Study Mercapto-Nanoscavenger as a Promising Analytical Tool
Authors: Mohammed M. Algaradah
Abstract:
A chelating mercapto-nanoscavenger has been developed exploiting the high surface area of monodisperse nano-sized mesoporous silica. The nanoscavenger acts as a solid phase trace metal extractant whilst suspended as a quasi-stable sol in aqueous samples. This mode of extraction requires no external agitation, as the particles move naturally through the sample by Brownian motion, convection and slow sedimentation. Careful size selection enables the nanoscavenger to be easily recovered, together with the extracted analyte, by conventional filtration or centrifugation. The research describes the successful attachment of a mercapto chelator to ca. 136 ± 15 nm high surface area (BET surface area = 1006 m2 g-1) mesoporous silica particles. The resulting material had a copper capacity of ca. 1.34 ± 0.10 mmol g-1 and was successfully applied to the collection of a trace element from water. Essentially complete recovery of Cu(II) has been achieved from freshwater samples, giving typical preconcentration factors of 100 from 50 µg/l samples. Data obtained from a nanoscavenger-based extraction of copper from samples were not significantly different from those obtained using a conventional colorimetric procedure employing complexation/solvent extraction.
Keywords: nano scavenger, mesoporous silica, trace metal, preconcentration
Procedia PDF Downloads 83
12761 An Automated System for the Detection of Citrus Greening Disease Based on Visual Descriptors
Authors: Sidra Naeem, Ayesha Naeem, Sahar Rahim, Nadia Nawaz Qadri
Abstract:
Citrus greening is a bacterial disease that causes considerable damage to citrus fruits worldwide. Efficient methods for detecting this disease must be developed to minimize production losses. This paper presents a pattern recognition system that comprises three stages for the detection of citrus greening from orange leaves: segmentation, feature extraction and classification. Image segmentation is accomplished by adaptive thresholding. The feature extraction stage comprises three visual descriptors, i.e. shape, color and texture. As the shape feature we used the asymmetry index, as the color feature the histogram of the Cb component from the YCbCr domain, and as the texture feature the local binary pattern. Classification was done using support vector machines and k nearest neighbors. The best performance of the system, accuracy = 88.02% and AUROC = 90.1%, was achieved with automatically segmented images. Our experiments validate that: (1) segmentation is an imperative preprocessing step for computer-assisted diagnosis of citrus greening, and (2) the combination of shape, color and texture features forms a complementary set towards the identification of citrus greening disease.
Keywords: citrus greening, pattern recognition, feature extraction, classification
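A rough sketch of the color and texture descriptors is given below, using a manual RGB to YCbCr conversion for the Cb histogram and scikit-image's local binary pattern; the bin counts and LBP parameters are assumptions, and the shape descriptor (asymmetry index) is omitted for brevity.

```python
# Minimal sketch (assumed parameters): Cb-component histogram + LBP histogram
# as color/texture descriptors of a segmented leaf image.
import numpy as np
from skimage.feature import local_binary_pattern

def cb_histogram(rgb, bins=32):
    r, g, b = rgb[..., 0].astype(float), rgb[..., 1].astype(float), rgb[..., 2].astype(float)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b     # Cb channel of YCbCr
    hist, _ = np.histogram(cb, bins=bins, range=(0, 255), density=True)
    return hist

def lbp_histogram(gray, points=8, radius=1):
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(0)
leaf_rgb = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)   # stand-in leaf patch
leaf_gray = leaf_rgb.mean(axis=2).astype(np.uint8)

features = np.concatenate([cb_histogram(leaf_rgb), lbp_histogram(leaf_gray)])
print(features.shape)    # feature vector fed to the SVM / k-NN classifier
```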
Procedia PDF Downloads 184
12760 Email Phishing Detection Using Natural Language Processing and Convolutional Neural Network
Abstract:
Phishing is one of the oldest and best-known scams on the Internet. It can be defined as any type of telecommunications fraud that uses social engineering tricks to obtain confidential data from its victims. It is a cybercrime aimed at stealing sensitive information. Phishing is generally done via private email, so scammers impersonate large companies or other trusted entities to encourage victims to voluntarily provide information such as login credentials or, worse yet, credit card numbers. The COVID-19 theme is used by cybercriminals in multiple malicious campaigns such as phishing. In this environment, message filtering solutions have become essential to protect devices that are now used outside of the secure perimeter. Despite constantly updated methods to avoid these cyberattacks, the end result is currently insufficient. Many researchers are looking for optimal solutions to filter phishing emails, but good results are still needed. In this work, we concentrated on solving the problem of detecting phishing emails using the different steps of NLP preprocessing, and we proposed and trained a model using a one-dimensional CNN. Our study results show that our model obtained an accuracy of 99.99%, which demonstrates how well our model works.
Keywords: phishing, e-mail, NLP preprocessing, CNN, e-mail filtering
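The sketch below shows one way such a 1-D CNN text classifier could be assembled with Keras; the vocabulary size, sequence length, and layer sizes are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch (assumed hyperparameters): a 1-D CNN over tokenized email text
# for phishing vs legitimate classification.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN = 20000, 200

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),
    layers.Conv1D(128, kernel_size=5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),        # 1 = phishing, 0 = legitimate
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stand-in data: in practice, emails are cleaned (NLP preprocessing), tokenized
# and padded to MAX_LEN integer token ids before training.
X = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, 2, size=(32,))
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
print(model.predict(X[:2], verbose=0))
```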
Procedia PDF Downloads 126
12759 Managers’ Mobile Information Behavior in an Openness Paradigm Era
Authors: Abd Latif Abdul Rahman, Zuraidah Arif, Muhammad Faizal Iylia, Mohd Ghazali, Asmadi Mohammed Ghazali
Abstract:
Mobile information is a significant access point for human information activities. Theories and models of human information behavior have developed over several decades but have not yet considered the role of the user's computing device in digital information interactions. This paper reviews the literature that leads to developing a conceptual framework for a study on managers' mobile information behavior. Based on the literature review, dimensions of mobile information behavior are identified, namely information needs, information access, information retrieval and information use. The study is significant for understanding the nature of librarians' behavior in searching, retrieving and using information via mobile devices. Secondly, the study would provide suggestions about the various kinds of mobile applications which organizations can provide for their staff to improve their services.
Keywords: mobile information behavior, information behavior, mobile information, mobile devices
Procedia PDF Downloads 349
12758 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be localized is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image directly in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the image geolocalization method. For example, excessive down-sampling can lead to blurred building contours, and inappropriate cropping can lead to the loss of key semantic elements, resulting in incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, thus keeping the geometric shapes as in the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized images. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of geometric elements extracted by the first convolutional layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
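The channel-attention building block of the resizer can be illustrated with a generic squeeze-and-excitation block; the sketch below combines it with a bilinear resizing layer and a simple fusion step. The reduction ratio, layer sizes, and fusion by concatenation are assumptions for illustration and do not reproduce the paper's SKNet branch or its exact architecture.

```python
# Minimal sketch (assumed layout): bilinear resize -> feature maps -> SE-style
# channel recalibration -> fusion with the resized image.
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, reduction=8):
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)                 # squeeze: per-channel statistic
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)    # excitation: channel weights
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])                       # emphasize informative maps

inp = layers.Input(shape=(None, None, 3))                  # arbitrary-sized input image
resized = layers.Resizing(224, 224, interpolation="bilinear")(inp)
feats = layers.Conv2D(16, 3, padding="same", activation="relu")(resized)
feats = se_block(feats)
fused = layers.Concatenate()([resized, feats])             # simple fusion for illustration
model = tf.keras.Model(inp, fused)
model.summary()
```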
Procedia PDF Downloads 214
12757 Model-Based Field Extraction from Different Class of Administrative Documents
Authors: Jinen Daghrir, Anis Kricha, Karim Kalti
Abstract:
The amount of incoming administrative documents is massive, and manually processing these documents is a costly task, especially in terms of time. In fact, this problem has led to a significant amount of research and development on automatically extracting fields from administrative documents, in order to reduce costs and increase citizen satisfaction with administrations. In this context, we introduce an administrative document understanding system. Given a document in which a user has selected the fields to be retrieved for a document class, a document model is automatically built. A document model is represented by an attributed relational graph (ARG), where nodes represent fields to extract and edges represent the relations between them. Both vertices and edges carry feature vectors. When another document arrives in the system, the layout objects are extracted and an ARG is generated. Field extraction is then translated into a problem of matching two ARGs, which relies mainly on the comparison of the spatial relationships between layout objects. Experimental results yield accuracy rates from 75% to 100%, tested on eight document classes. Our proposed method performs well considering that the document model is constructed from only one single document.
Keywords: administrative document understanding, logical labelling, logical layout analysis, fields extraction from administrative documents
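The sketch below illustrates how such an attributed relational graph could be built from layout objects and compared through simple spatial relations, using networkx; the attributes and the similarity rule are simplifications assumed for illustration, not the paper's matching algorithm.

```python
# Minimal sketch (assumed attributes): build an attributed relational graph (ARG)
# of labeled fields, with relative placement between bounding boxes as edge attributes.
import networkx as nx

def build_arg(fields):
    """fields: {name: (x, y, w, h)} bounding boxes of the fields to extract."""
    g = nx.Graph()
    for name, box in fields.items():
        g.add_node(name, box=box)
    names = list(fields)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (xa, ya, _, _), (xb, yb, _, _) = fields[a], fields[b]
            g.add_edge(a, b, dx=xb - xa, dy=yb - ya)    # relative placement
    return g

def relation_distance(model, candidate):
    """Compare the spatial relations of two ARGs that share the same node set."""
    diffs = [abs(model.edges[e]["dx"] - candidate.edges[e]["dx"]) +
             abs(model.edges[e]["dy"] - candidate.edges[e]["dy"])
             for e in model.edges]
    return sum(diffs) / len(diffs)

# Hypothetical field layouts for a model document and a newly arrived one.
model_arg = build_arg({"date": (400, 40, 120, 20), "sender": (50, 80, 200, 40), "amount": (420, 600, 100, 20)})
new_arg = build_arg({"date": (395, 45, 118, 20), "sender": (55, 85, 205, 40), "amount": (415, 610, 100, 20)})
print("mean spatial-relation distance:", relation_distance(model_arg, new_arg))
```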
Procedia PDF Downloads 213
12756 Impacts of Climate Change and Natural Gas Operations on the Hydrology of Northeastern BC, Canada: Quantifying the Water Budget for Coles Lake
Authors: Sina Abadzadesahraei, Stephen Déry, John Rex
Abstract:
Climate research has repeatedly identified strong associations between anthropogenic emissions of 'greenhouse gases' and observed increases in global mean surface air temperature over the past century. Studies have also demonstrated that the degree of warming varies regionally. Canada is not exempt from this situation, and evidence is mounting that climate change is beginning to cause diverse impacts in both environmental and socio-economic spheres of interest. For example, northeastern British Columbia (BC), whose climate is controlled by a combination of maritime, continental and arctic influences, is warming at a greater rate than the remainder of the province. There are indications that these changing conditions are already leading to shifting patterns in the region's hydrological cycle, and thus its available water resources. Coincident with these changes, northeastern BC is undergoing rapid development for oil and gas extraction: this depends largely on subsurface hydraulic fracturing ('fracking'), which uses enormous amounts of freshwater. While this industrial activity has made substantial contributions to regional and provincial economies, it is important to ensure that sufficient and sustainable water supplies are available for all those dependent on the resource, including ecological systems. This in turn demands a comprehensive understanding of how water in all its forms interacts with landscapes and the atmosphere, and of the potential impacts of changing climatic conditions on these processes. The aim of this study is therefore to characterize and quantify all components of the water budget in the small watershed of Coles Lake (141.8 km², 100 km north of Fort Nelson, BC), through a combination of field observations and numerical modelling. Baseline information will aid the assessment of the sustainability of current and future plans for freshwater extraction by the oil and gas industry, and will help to maintain the precarious balance between economic and environmental well-being. This project is a clear example of interdisciplinary research, in that it not only examines the hydrology of the region but also investigates how natural gas operations and growth can affect water resources. Therefore, a fruitful collaboration between academia, government and industry has been established to fulfill the objectives of this research in a meaningful manner. This project aims to provide numerous benefits to BC communities. Further, the outcomes and detailed information of this research can be a huge asset to researchers examining the effect of climate change on water resources worldwide.
Keywords: northeastern British Columbia, water resources, climate change, oil and gas extraction
Procedia PDF Downloads 264
12755 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks
Authors: Yong Zhao, Jian He, Cheng Zhang
Abstract:
Cardiovascular diseases caused by hypertension are extremely threatening to human health, and early diagnosis of hypertension can save a large number of lives. Traditional hypertension detection methods require special equipment and have difficulty detecting continuous blood pressure changes. In this regard, this paper first analyzes the principle of heart rate variability (HRV) and introduces a sliding window and power spectral density (PSD) to analyze the time-domain and frequency-domain features of HRV. Secondly, it designs an HRV-based hypertension prediction network by combining ResNet, an attention mechanism, and a multilayer perceptron: the network extracts frequency-domain features through a modified ResNet18, fuses them with time-domain features through an attention mechanism, and performs auxiliary prediction of hypertension through a multilayer perceptron. Finally, the network was trained and tested using the publicly available SHAREE dataset on PhysioNet, and the test results showed that this network achieved 92.06% prediction accuracy for hypertension and outperformed K-Nearest Neighbor (KNN), Bayes, logistic, and traditional Convolutional Neural Network (CNN) models in prediction performance.
Keywords: feature extraction, heart rate variability, hypertension, residual networks
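The HRV feature step can be sketched as below, computing standard time-domain statistics (SDNN, RMSSD) and a Welch power spectral density with LF/HF band powers from a sequence of RR intervals; the band limits follow common HRV conventions, and the synthetic RR series and resampling rate are assumptions for illustration.

```python
# Minimal sketch (assumed conventions): time-domain and frequency-domain HRV
# features from RR intervals (seconds).
import numpy as np
from scipy.signal import welch

def hrv_features(rr, fs=4.0):
    sdnn = np.std(rr, ddof=1)                         # time-domain features
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))

    # Resample the irregular RR series to an evenly spaced signal for the PSD.
    t = np.cumsum(rr)
    t_even = np.arange(t[0], t[-1], 1.0 / fs)
    rr_even = np.interp(t_even, t, rr)
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))

    df = freqs[1] - freqs[0]
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df   # low-frequency power
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df   # high-frequency power
    return {"SDNN": sdnn, "RMSSD": rmssd, "LF": lf, "HF": hf, "LF/HF": lf / hf}

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(600)) + 0.02 * rng.standard_normal(600)
print(hrv_features(rr))   # feature dictionary fed to a downstream classifier
```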
Procedia PDF Downloads 105