Search results for: information retrieval
10759 A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications
Authors: K. P. Sandesh, M. H. Suman
Abstract:
Text processing plays an important role in information retrieval, data mining, and web search. Measuring the similarity between documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither of the documents. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems, especially in the banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures.
Keywords: document classification, document clustering, entropy, accuracy, classifiers, clustering algorithms
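As a rough illustration of the three-case logic described in this abstract, the sketch below scores a pair of term-frequency vectors; the weighting scheme is an assumption made for illustration, not the authors' actual measure.

```python
import numpy as np

def three_case_similarity(d1, d2, presence_weight=1.0, absence_weight=0.5):
    """Score two term-frequency vectors, treating the three cases separately:
    a feature present in both documents, in only one, or in neither."""
    d1, d2 = np.asarray(d1, dtype=float), np.asarray(d2, dtype=float)
    both = (d1 > 0) & (d2 > 0)        # case 1: feature appears in both documents
    one = (d1 > 0) ^ (d2 > 0)         # case 2: feature appears in only one document
    neither = (d1 == 0) & (d2 == 0)   # case 3: feature appears in neither document
    score = (presence_weight * np.sum(1.0 / (1.0 + np.abs(d1[both] - d2[both])))
             - np.sum(one)                        # penalise one-sided features
             + absence_weight * np.sum(neither))  # small reward for shared absence
    return score / len(d1)

doc_a = [3, 0, 1, 0, 2]
doc_b = [2, 0, 0, 1, 2]
print(three_case_similarity(doc_a, doc_b))
```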
Procedia PDF Downloads 518
10758 A Preliminary Study for Building an Arabic Corpus of Pair Questions-Texts from the Web: Aqa-Webcorp
Authors: Wided Bakari, Patrice Bellot, Mahmoud Neji
Abstract:
With the development of electronic media and the heterogeneity of Arabic data on the Web, the idea of building a clean corpus for certain applications of natural language processing, including machine translation, information retrieval, and question answering, becomes more and more pressing. In this manuscript, we seek to create and develop our own corpus of question-text pairs. This corpus will then provide a better basis for our experimentation step. Thus, we try to model its construction by a method for Arabic insofar as it recovers texts from the web that could prove to be answers to our factual questions. To do this, we had to develop a Java script that can extract, from a given query, a list of HTML pages. These pages are then cleaned to the extent of having a database of texts and a corpus of question-text pairs. In addition, we give preliminary results of our proposed method. Some investigations for the construction of Arabic corpora are also presented in this document.
Keywords: Arabic, web, corpus, search engine, URL, question, corpus building, script, Google, html, txt
Procedia PDF Downloads 323
10757 A Case Report on Cognitive-Communication Intervention in Traumatic Brain Injury
Authors: Nikitha Francis, Anjana Hoode, Vinitha George, Jayashree S. Bhat
Abstract:
The interaction between cognition and language, referred to as cognitive-communication, is very intricate, involving several mental processes such as perception, memory, attention, lexical retrieval, decision making, motor planning, self-monitoring and knowledge. Cognitive-communication disorders are difficulties in communicative competencies that result from underlying cognitive impairments of attention, memory, organization, information processing, problem solving, and executive functions. Traumatic brain injury (TBI) is an acquired, non-progressive condition, resulting in distinct deficits of cognitive-communication abilities such as naming, word-finding, self-monitoring, auditory recognition, attention, perception and memory. Cognitive-communication intervention in TBI is individualized, in order to enhance the person’s ability to process and interpret information for better functioning in their family and community life. The present case report illustrates the cognitive-communicative behaviors and the intervention outcomes of an adult with TBI, who was brought to the Department of Audiology and Speech Language Pathology, with cognitive and communicative disturbances, consequent to a road traffic accident. On a detailed assessment, she showed naming deficits along with perseverations and had severe difficulty in recalling the details of the accident, her house address, places she had visited earlier, names of people known to her, as well as the activities she did each day, leading to severe breakdowns in her communicative abilities. She had difficulty in initiating, maintaining and following a conversation. She also lacked orientation to time and place. On administration of the Manipal Manual of Cognitive Linguistic Abilities (MMCLA), she exhibited poor performance on tasks related to visual and auditory perception, short-term memory, working memory and executive functions. She attended 20 sessions of cognitive-communication intervention which followed a domain-general, adaptive training paradigm, with tasks relevant to everyday cognitive-communication skills. Compensatory strategies such as maintaining a diary with reminders of her daily routine, names of people, date, time and place were also recommended. MMCLA was re-administered, and her performance on the tasks showed significant improvements. The occurrence of perseverations and word retrieval difficulties reduced. She developed interest in initiating her day-to-day activities at home independently, as well as in involving herself in conversations with her family members. Though she lacked awareness of her deficits, she actively involved herself in all the therapy activities. Rehabilitation of moderate to severe head injury patients can be done effectively through holistic cognitive retraining with a focus on different cognitive-linguistic domains. The selection of goals and activities should have relevance to the functional needs of each individual with TBI, as highlighted in the present case report.
Keywords: cognitive-communication, executive functions, memory, traumatic brain injury
Procedia PDF Downloads 346
10756 Management Information System to Help Managers for Providing Decision Making in an Organization
Authors: Ajayi Oluwasola Felix
Abstract:
Management information system (MIS) provides information for the managerial activities in an organization. The main purpose of this research is to show that MIS provides accurate and timely information necessary to facilitate the decision-making process and enables an organization's planning, control, and operational functions to be carried out effectively. Management information system (MIS) is basically concerned with processing data into information, which is then communicated to the various departments in an organization for appropriate decision-making. MIS is a subset of the overall planning and control activities covering the application of humans, technologies, and procedures of the organization. The information system is the mechanism to ensure that information is available to the managers in the form they want it and when they need it.
Keywords: Management Information Systems (MIS), information technology, decision-making, MIS in Organizations
Procedia PDF Downloads 555
10755 Investigating Naming and Connected Speech Impairments in Moroccan AD Patients
Authors: Mounia El Jaouhari, Mira Goral, Samir Diouny
Abstract:
Introduction: Previous research has indicated that language impairments are recognized as a feature of many neurodegenerative disorders, including non-language-led dementia subtypes such as Alzheimer's disease (AD). In this preliminary study, the focal aim is to quantify the semantic content of naming and connected speech samples of Moroccan patients diagnosed with AD using two tasks taken from the culturally adapted and validated Moroccan version of the Boston Diagnostic Aphasia Examination. Methods: Five individuals with AD and five neurologically healthy individuals matched for age, gender, and education will participate in the study. Participants with AD will be diagnosed on the basis of the Moroccan version of the Diagnostic and Statistical Manual of Mental Disorders (DSM-4) screening test, the Moroccan version of the Mini-Mental State Examination (MMSE) test scores, and neuroimaging analyses. The participants will engage in two tasks taken from the MDAE-SF: 1) picture description and 2) naming. Expected findings: Consistent with previous studies conducted on English-speaking AD patients, we expect to find significant word production and retrieval impairments in AD patients in all measures. Moreover, we expect to find category fluency impairments that further endorse semantic breakdown accounts. In sum, not only will the findings of the current study shed more light on the locus of word retrieval impairments noted in AD, but they will also reflect the nature of Arabic morphology. In addition, the error patterns are expected to be similar to those found in previous AD studies in other languages.
Keywords: Alzheimer's disease, anomia, connected speech, semantic impairments, Moroccan Arabic
Procedia PDF Downloads 142
10754 Quick Response Codes in Physio: A Simple Click to Long-Term Oxygen Therapy Education
Authors: K. W. Lee, C. M. Choi, H. C. Tsang, W. K. Fong, Y. K. Cheng, L. Y. Chan, C. K. Yuen, P. W. Lau, Y. L. To, K. C. Chow
Abstract:
QR (Quick Response) Code is a matrix barcode. It enables users to open websites, photos and other information with mobile devices by just snapping the code. In the usual Long Term Oxygen Therapy (LTOT) arrangement, piles of LTOT-related information, such as leaflets from different oxygen service providers, are given to patients to choose an appropriate plan according to their needs. If these printed materials are transformed into electronic format (QR Code), it would be more environmentally-friendly. More importantly, electronic materials including LTOT equipment operation and dyspnoea relieving techniques also empower patients in long-term disease management. The objective of this study is to investigate the effect of QR code in patient education on new LTOT users. This study was carried out in medical wards of North District Hospital. Adult patients and relatives who followed commands, were able to use smartphones with internet services, and required LTOT arrangement on hospital discharge were recruited. In the LTOT arrangement, apart from the usual LTOT education booklets which included patients’ personal information (e.g. oxygen titration and six-minute walk test results etc.), extra leaflets consisting of (1) QR codes of LTOT plans from different oxygen service providers, (2) education materials on dyspnoea management, and (3) instructions on LTOT equipment operation were given. Upon completion of the LTOT arrangement, a questionnaire about the use of QR codes in patient education was filled in by patients or relatives. A total of 10 new LTOT users were recruited from November 2017 to January 2018. Initially, 70% of them did not know anything about the QR code, but all of them understood its operation after a simple demonstration. 70% of them agreed that it was convenient to use (20% strongly agree, 40% agree, 10% somewhat agree). 80% of them agreed that the QR code could facilitate the retrieval of more LTOT-related information (10% strongly agree, 70% agree), while 90% agreed that we should continue delivering QR code leaflets to new LTOT users in the future (30% strongly agree, 40% agree, 20% somewhat agree). It is proven that the QR code is a convenient and environmentally-friendly tool to deliver information. It is also relatively easy to introduce to new users. It has received welcoming feedback from current users.
Keywords: long-term oxygen therapy, physiotherapy, patient education, QR code
Procedia PDF Downloads 147
10753 Chemical Reaction Algorithm for Expectation Maximization Clustering
Authors: Li Ni, Pen ManMan, Li KenLi
Abstract:
Clustering has been an area of intensive research for some years because of its multifaceted applications, such as biology, information retrieval, medicine, business and so on. Expectation maximization (EM) is a kind of algorithmic framework in clustering methods and one of the ten classic algorithms of machine learning. Traditionally, optimization of an objective function has been the standard approach in EM. Hence, research has investigated the utility of evolutionary computing and related techniques in this regard. Chemical Reaction Optimization (CRO) is a recently established method, so the properties embedded in CRO are used to solve optimization problems. This paper presents an algorithm framework (EM-CRO) with modified CRO operators based on EM cluster problems. The hybrid algorithm mainly addresses the initial-value sensitivity of objective-function-optimization clustering algorithms. Our experiments mainly take the classic EM algorithms k-means and fuzzy k-means as examples, using the CRO algorithm to optimize their initial values and obtaining the K-means-CRO and FKM-CRO algorithms. The experimental results show improved efficiency in solving objective function optimization clustering problems.
Keywords: chemical reaction optimization, expectation maximization, initial value, objective function clustering
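As a loose illustration of optimizing the initial values of k-means before clustering, the sketch below uses a plain random search in place of the paper's CRO operators; all names and parameters are assumptions, not the EM-CRO implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_initial_centroids(X, k, n_candidates=20, seed=0):
    """Evaluate several candidate initialisations and keep the one with the
    lowest k-means objective (a stand-in for the CRO search over initial values)."""
    rng = np.random.default_rng(seed)
    best_centroids, best_inertia = None, np.inf
    for _ in range(n_candidates):
        idx = rng.choice(len(X), size=k, replace=False)   # one candidate "molecule"
        centroids = X[idx]
        inertia = KMeans(n_clusters=k, init=centroids, n_init=1, max_iter=1).fit(X).inertia_
        if inertia < best_inertia:
            best_centroids, best_inertia = centroids, inertia
    return best_centroids

X = np.random.default_rng(1).normal(size=(300, 2))
init = pick_initial_centroids(X, k=3)
labels = KMeans(n_clusters=3, init=init, n_init=1).fit_predict(X)
```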
Procedia PDF Downloads 710
10752 Improved Pitch Detection Using Fourier Approximation Method
Authors: Balachandra Kumaraswamy, P. G. Poonacha
Abstract:
Automatic Music Information Retrieval has been one of the challenging topics of research for a few decades now, with several interesting approaches reported in the literature. In this paper, we have developed a pitch extraction method based on a finite Fourier series approximation to the given window of samples. We then estimate pitch as the fundamental period of this finite Fourier series approximation. The method uses analysis of the strength of harmonics present in the signal to reduce octave as well as harmonic errors. The performance of our method is compared with three of the best-known methods for pitch extraction, namely, the Yin, Windowed Special Normalization of the Auto-Correlation Function, and Harmonic Product Spectrum methods of pitch extraction. Our study with artificially created signals as well as music files shows that the Fourier Approximation method gives a much better estimate of pitch with fewer octave and harmonic errors.
Keywords: pitch, Fourier series, Yin, normalization of the auto-correlation function, harmonic product, mean square error
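The sketch below conveys the general idea of scoring candidate fundamentals by the strength of their harmonics; it is an FFT-based stand-in written for illustration, not the authors' finite Fourier series fit.

```python
import numpy as np

def estimate_pitch(x, fs, fmin=60.0, fmax=800.0, n_harmonics=5):
    """Score each candidate f0 by the (1/k-weighted) spectral magnitude at its harmonics."""
    n_fft = 8 * len(x)                                    # zero-pad for a finer frequency grid
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x)), n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    candidates = np.arange(fmin, fmax, 1.0)
    def strength(f0):
        bins = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harmonics + 1)]
        # weighting by 1/k discourages octave-below (subharmonic) errors
        return sum(spectrum[b] / k for k, b in zip(range(1, n_harmonics + 1), bins))
    return candidates[np.argmax([strength(f0) for f0 in candidates])]

fs = 16000
t = np.arange(0, 0.064, 1.0 / fs)
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(estimate_pitch(tone, fs))    # expected to land close to 220 Hz
```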
Procedia PDF Downloads 412
10751 A Phenomenological Approach to Computational Modeling of Analogy
Authors: José Eduardo García-Mendiola
Abstract:
In this work, a phenomenological approach to the computational modeling of analogy processing is carried out. The paper goes through the consideration of the structure of the analogy, based on the possibility of sustaining the genesis of its elements regarding Husserl's genetic theory of association. Among the particular processes which take place in order to get analogical inferences, there is one which is crucial for enabling efficient base-case retrieval through long-term memory, namely analogical transference grounded on familiarity. In general, it has been argued that analogical reasoning is a way by which a conscious agent tries to determine or define a certain scope of objects and relationships between them using previous knowledge of another familiar domain of objects and relations. However, when looking for a complete description of the analogy process, a deeper consideration of its phenomenological nature is required insofar as its simulation by computational programs is the aim. Also, one would get an idea of how complex it would be to have a fully computational account of the analogy elements. In fact, familiarity is not a result of a mere chain of repetitions of objects or events but is generated insofar as the object/attribute or event in question is integrable inside a certain context that is taking shape as functionalities and functional approaches or perspectives of the object are being defined. Its familiarity is generated not by the identification of its parts or objective determinations as if they were isolated from those functionalities and approaches. Rather, at the core of such a familiarity between entities of different kinds lies the way they are functionally encoded. Hoping to make deeper inroads into these topics, this essay allows us to consider that cognitive-computational perspectives can draw, from the phenomenological projection of the analogy process, both a review of achievements already obtained and an exploration of new theoretical-experimental configurations towards the implementation of analogy models in specific-purpose as well as general-purpose machines.
Keywords: analogy, association, encoding, retrieval
Procedia PDF Downloads 121
10750 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a "child node". This logic relation (L) can be formally defined as a triple ontological relation (LO) equivalent to LO in ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes to form a rooted tree of ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model looks forward to increasing documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena for a concrete dataset of textual corpora, with the RMySQL package. Mondoc profiles archival utilities implementing SQL programming code and allows data export to XML schemas, for achieving semantic and faceted analysis of speech by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that co-associations between (t) and its corresponding synonyms and antonyms (synsets) are also inverse. The results from grouping facets or polysemic words with synsets in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) state how to proceed with the semantic indexing of hypernymy phenomena for subject-heading lists and for authority lists for documentary and archival purposes. Mondoc contributes to the development of web directories and seems to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of on-line catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used for implementing data models, by a prior adaptation of the base ontology to structured meta-languages, such as OWL and RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a faceted semantic indexing approach. It enables information retrieval, as well as quantitative and qualitative data interpretation. The model reproduces a triple tuple ⟨LN, LE, LT, LCF, L, BKF⟩ where LN is a set of entities that connect with other nodes to form a rooted tree in ⟨LN, LE⟩, LT specifies a set of terms, and LCF acts as a finite set of concepts, encoded in a formal language, L. Mondoc only resolves partial problems of linguistic ambiguity (in the case of synonymy and antonymy), but neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target meta-languages oriented towards structured documents in XML.
Keywords: hypernymy, information retrieval, lightweight ontology, resonance
Procedia PDF Downloads 124
10749 Building Knowledge Society: The Imperative Role of Library and Information Centres (LICs) in Developing Countries
Authors: Desmond Chinedu Oparaku, Oyemike Victor Benson, Ifeyinwa A. Ariole
Abstract:
A critical examination of the emerging knowledge society reveals that library and information centres have a significant role to play in the building of a knowledge society. The major highlights of this paper include: the conceptual analysis of the knowledge society, an overview of library and information centres in developing countries, the role of libraries and information centres in the building up of a knowledge society, library and information professionals as a factor in building knowledge, challenges faced by Library and Information Centres (LICs) in building a knowledge society, and strategies for building a knowledge society. The position of this paper is that, in spite of the influx of varied information and communication technologies in the information industry, which is the driving force of the knowledge society, there is a dire need for Libraries and Information Centres (LICs) to contribute positively to the migration and transition processes from the information society to a knowledge-based society.
Keywords: information and communication technology (ICT), information centres, information industry, information society
Procedia PDF Downloads 377
10748 Advantages of Multispectral Imaging for Accurate Gas Temperature Profile Retrieval from Fire Combustion Reactions
Authors: Jean-Philippe Gagnon, Benjamin Saute, Stéphane Boubanga-Tombet
Abstract:
Infrared thermal imaging is used for a wide range of applications, especially in the combustion domain. However, it is well known that most combustion gases such as carbon dioxide (CO₂), water vapor (H₂O), and carbon monoxide (CO) selectively absorb/emit infrared radiation at discrete energies, i.e., over a very narrow spectral range. Therefore, temperature profiles of most combustion processes derived from conventional broadband imaging are inaccurate without prior knowledge or assumptions about the spectral emissivity properties of the combustion gases. Using spectral filters allows estimating these critical emissivity parameters in addition to providing selectivity regarding the chemical nature of the combustion gases. However, due to the turbulent nature of most flames, it is crucial that such information be obtained without sacrificing temporal resolution. For this reason, Telops has developed a time-resolved multispectral imaging system which combines a high-performance broadband camera synchronized with a rotating spectral filter wheel. In order to illustrate the benefits of using this system to characterize combustion experiments, measurements were carried out using a Telops MS-IR MW on a very simple combustion system: a wood fire. The temperature profiles calculated using the spectral information from the different channels were compared with corresponding temperature profiles obtained with conventional broadband imaging. The results illustrate the benefits of the Telops MS-IR cameras for the characterization of laminar and turbulent combustion systems at a high temporal resolution.
Keywords: infrared, multispectral, fire, broadband, gas temperature, IR camera
Procedia PDF Downloads 142
10747 A Forward-Looking View of the Intellectual Capital Accounting Information System
Authors: Rbiha Salsabil Ketitni
Abstract:
A company is, in its entirety, a series of information flows in which each piece of information serves several events and activities, and the latter are nothing but large sets of data, or big data. The enormity of this information leads to the possibility of sometimes losing it, and this possibility must be avoided in the institution, especially for the information that has a significant impact on it. In most cases, information systems are used to avoid the loss of this information and to keep it relatively correct. At present, it is impossible to have a company that does not have information systems, as the latter organizes the information as well as preserves it, and even saves time for its owner as a result of the speed of its operation. This study aims to provide an idea of an accounting information system that opens a forward-looking line of study for its manufacture and development by researchers, scientists, and professionals. This is the result of most individuals seeing a great contradiction in an information system for intellectual capital that does not provide real values when measured and whose disclosure in financial reports is not distinguished by transparency.
Keywords: accounting, intellectual capital, intellectual capital accounting, information system
Procedia PDF Downloads 81
10746 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts
Authors: William Michael Short
Abstract:
‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues within the perspective of cognitive linguistics and cognitive anthropology that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable. But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to help capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage.
The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions.
Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics
Procedia PDF Downloads 132
10745 Students’ Willingness to Use Public Computing Facilities at a Library
Authors: Norbayah Mohd Suki, Norazah Mohd Suki
Abstract:
This study aims to examine the relationships between attitude, self-efficacy, and subjective norm and students’ behavioural intention to use public computing facilities at a library. Data were collected from 200 undergraduate students enrolled at a higher learning institution in the Federal Territory of Labuan, Malaysia via a structured questionnaire comprising closed-ended questions. Data were analyzed using multiple regression analysis. The results show that students’ behavioural intention to use public computing facilities at the library is widely affected by the subjective norm factor, i.e., the influence of the support of family members, friends and neighbours. The findings of this study provide a better understanding of the factors likely to influence students’ behavioural intention to use public computing facilities at a library. It also offers valuable insights into factors which university librarians need to focus on to improve students’ behavioural intention to actively use public computing facilities at a library for quality information retrieval. Directions for future research are also presented.
Keywords: attitude, self-efficacy, subjective norm, behavioural intention
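A minimal sketch of the multiple regression step on synthetic data (the coefficients and sample size are invented; statsmodels is assumed to be available):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
attitude, self_efficacy, subjective_norm = rng.normal(size=(3, n))
# synthetic outcome in which subjective norm dominates, mirroring the reported finding
intention = 0.2 * attitude + 0.1 * self_efficacy + 0.5 * subjective_norm \
            + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([attitude, self_efficacy, subjective_norm]))
model = sm.OLS(intention, X).fit()
print(model.summary())   # the subjective-norm coefficient should be the largest
```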
Procedia PDF Downloads 446
10744 Development of a Web-Based Application for Intelligent Fertilizer Management in Rice Cultivation
Authors: Hao-Wei Fu, Chung-Feng Kao
Abstract:
In the era of rapid technological advancement, information technology (IT) has become integral to modern life, exerting significant influence across diverse sectors and serving as a catalyst for development in various industries. Within agriculture, the integration of IT offers substantial benefits, notably enhancing operational efficiency. Real-time monitoring systems, for instance, have been widely embraced in agriculture, effectively improving crop management practices. This study specifically addresses the management of rice panicle fertilizer, presenting the development of a web application tailored to handle data associated with rice panicle fertilizer management. Leveraging the normalized difference red edge (NDRE) index, this application optimizes the quantity of rice panicle fertilizer used, providing recommendations to agricultural stakeholders and service providers in the agricultural information sector. The overarching objective is to minimize costs while maximizing yields. Furthermore, a robust database system has been established to store and manage relevant data for future reference in rice cultivation management. Additionally, the study utilizes the Representational State Transfer (REST) software architectural style to construct an application programming interface (API), facilitating data creation, retrieval, updating, and deletion for users via HyperText Transfer Protocol methods. Future plans involve integrating this API with third-party services to incorporate it into larger frameworks, thus catering to the diverse requirements of various third-party services.
Keywords: application programming interface, HyperText Transfer Protocol, nitrogen fertilizer intelligent management, web-based application
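A minimal Flask sketch of the kind of REST endpoints described above; the route, payload fields, and the NDRE-to-fertilizer rule are illustrative assumptions, not the application's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
records = {}   # stands in for the database mentioned in the abstract

def recommend_n_rate(ndre):
    """Toy rule: recommend less panicle fertilizer when NDRE indicates a greener canopy."""
    return max(0.0, 60.0 * (1.0 - ndre))   # kg N/ha, purely illustrative

@app.route("/fields/<field_id>/ndre", methods=["POST"])
def create_record(field_id):
    ndre = float(request.get_json()["ndre"])
    records[field_id] = {"ndre": ndre, "n_rate": recommend_n_rate(ndre)}
    return jsonify(records[field_id]), 201

@app.route("/fields/<field_id>/ndre", methods=["GET"])
def read_record(field_id):
    return jsonify(records.get(field_id, {})), 200

if __name__ == "__main__":
    app.run(debug=True)
```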
Procedia PDF Downloads 59
10743 Information Needs and Information Usage of the Older Person Club’s Members in Bangkok
Authors: Siriporn Poolsuwan
Abstract:
This research aims to explore the information needs, information usage, and problems of information usage of the older person clubs’ members in Dusit District, Bangkok. There are 12 clubs and 746 club members in this district. The research results are used for older person services in this district. Data were gathered from 252 club members by using questionnaires. A quantitative approach was used, analysing the data by percentages, means, and standard deviations. The results are as follows: (1) the older people need information for entertainment, occupation, and academic purposes, in the fields of short stories, computer work, and religion and morality; (2) the participants use information from various sources; (3) the main problem of information usage is their language skills, owing to the older people’s literacy problems.
Keywords: information behavior, older person, information seeking, knowledge discovery and data mining
Procedia PDF Downloads 269
10742 Key Frame Based Video Summarization via Dependency Optimization
Authors: Janya Sainui
Abstract:
With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast video browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches heuristically select key frames; hence, the selected key frames may not be the most different frames and/or may not cover the entire content of the video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to the state-of-the-art approaches.
Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information
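A greedy sketch of choosing key frames that balance coverage against redundancy; a Gaussian-kernel similarity stands in for the quadratic mutual information criterion, and the frame features are synthetic.

```python
import numpy as np

def select_key_frames(features, n_keys=5, sigma=1.0, trade_off=0.5):
    """Greedily pick frames that represent the whole video while avoiding near-duplicates."""
    features = np.asarray(features, dtype=float)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))          # pairwise frame similarity
    selected = []
    for _ in range(n_keys):
        best, best_score = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            coverage = K[i].mean()                                  # closeness to all frames
            redundancy = K[i, selected].max() if selected else 0.0  # closeness to chosen keys
            score = coverage - trade_off * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

frames = np.random.default_rng(0).normal(size=(120, 16))   # e.g. 16-bin colour histograms
print(select_key_frames(frames, n_keys=5))
```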
Procedia PDF Downloads 265
10741 Optimizing the Public Policy Information System under the Environment of E-Government
Authors: Qian Zaijian
Abstract:
E-government is one of the hot issues in current academic research on public policy and management. As the organic integration of information and communication technology (ICT) and public administration, e-government is one of the most important areas of the contemporary information society. The policy information system is a basic subsystem of the public policy system; its operation affects the overall effect of the policy process and can even exert a direct impact on the operation of a public policy and its success or failure. The basic principle of its operation is information collection, processing, analysis and release for a specific purpose. The function of e-government for the public policy information system lies in the promotion of public access to policy information resources, information transmission through e-participation, e-consultation in the process of policy analysis and processing of information, and electronic services for stored policy information, so as to promote the optimization of policy information systems. However, due to many factors, the function of e-government in promoting policy information system optimization has its practical limits. In the building of e-government in our country, we should take such paths as adhering to the principle of freedom of information, eliminating the information divide (gap), expanding e-consultation, and breaking down information silos, among other major paths, so as to promote the optimization of public policy information systems.
Keywords: China, e-consultation, e-democracy, e-government, e-participation, ICTs, public policy information systems
Procedia PDF Downloads 862
10740 Improving the Statistics Nature in Research Information System
Authors: Rajbir Cheema
Abstract:
The aim is to introduce an integrated research information system, which will provide scientific institutions with the necessary information on research activities and research results in assured quality. Since duplication, missing values, incorrect formatting, inconsistencies, etc. can arise in the collection of research data in different research information systems, and these can have a wide range of negative effects on data quality, the subject of data quality should be addressed with the aim of better results. This paper examines the data quality problems in research information systems and presents new techniques that enable organizations to improve the quality of their research information.
Keywords: research information systems (RIS), research information, heterogeneous sources, data quality, data cleansing, science system, standardization
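A toy example of the cleansing steps mentioned above (duplicates, missing values, inconsistent formatting); the column names are invented for illustration.

```python
import pandas as pd

raw = pd.DataFrame({
    "doi":   ["10.1/a", "10.1/a", "10.1/b", None],
    "title": ["Paper A", "Paper A", "paper b ", "Paper C"],
    "year":  ["2019", "2019", None, "2021"],
})

clean = (raw
         .drop_duplicates(subset="doi")                                   # remove duplicate records
         .assign(title=lambda d: d["title"].str.strip().str.title(),      # fix inconsistent formatting
                 year=lambda d: pd.to_numeric(d["year"], errors="coerce"))
         .dropna(subset=["doi"]))                                         # drop records without an identifier
print(clean)
```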
Procedia PDF Downloads 155
10739 Impact of Similarity Ratings on Human Judgement
Authors: Ian A. McCulloh, Madelaine Zinser, Jesse Patsolic, Michael Ramos
Abstract:
Recommender systems are a common artificial intelligence (AI) application. For any given input, a search system will return a rank-ordered list of similar items. As users review returned items, they must decide when to halt the search and either revise search terms or conclude that their requirement is novel, with no similar items in the database. We present a statistically designed experiment that investigates the impact of similarity ratings on human judgement in concluding that a search item is novel and halting the search. 450 participants were recruited from Amazon Mechanical Turk to render judgement across 12 decision tasks. We find that the inclusion of ratings increases the human perception that items are novel. Percent similarity increases novelty discernment when compared with star-rated similarity or the absence of a rating. Ratings reduce the time to decide and improve decision confidence. This suggests the inclusion of similarity ratings can aid human decision-makers in knowledge search tasks.
Keywords: ratings, rankings, crowdsourcing, empirical studies, user studies, similarity measures, human-centered computing, novelty in information retrieval
Procedia PDF Downloads 130
10738 Coronavirus Academic Paper Sorting Application
Authors: Christina A. van Hal, Xiaoqian Jiang, Luyao Chen, Yan Chu, Robert D. Jolly, Yaobin Lin, Jitian Zhao, Kang Lin Hsieh
Abstract:
The COVID-19 Literature Summary App was created for the primary purpose of enabling academicians and clinicians to quickly sort through the vast array of recent coronavirus publications by topics of interest. Multiple methods of summarizing and sorting the manuscripts were created. A summary page introduces the application's functions and capabilities, while an interactive map provides daily updates on infection, death, and recovery rates. A page with a pivot table allows publication sorting by topic, with an interactive data table that allows sorting topics by columns, as well as the capability to view abstracts. Additionally, publications may be sorted by the medical topics they cover. We used the CORD-19 database to compile lists of publications. The data table can sort binary variables, allowing the user to pick desired publication topics, such as papers that describe COVID-19 symptoms. The application is primarily designed for use by researchers but can be used by anybody who wants a faster and more efficient means of locating papers of interest.
Keywords: COVID-19, literature summary, information retrieval, Snorkel
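A small sketch of the binary-topic filtering described above on a toy metadata table; the column names are assumptions, not the app's actual schema.

```python
import pandas as pd

papers = pd.DataFrame({
    "title":            ["Paper 1", "Paper 2", "Paper 3"],
    "covers_symptoms":  [1, 0, 1],
    "covers_treatment": [0, 1, 1],
})

wanted = ["covers_symptoms"]                       # topics the user ticked
subset = papers[papers[wanted].all(axis=1)]        # keep papers covering all chosen topics
print(subset[["title"]])
```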
Procedia PDF Downloads 151
10737 Band Characterization and Development of Hyperspectral Indices for Retrieving Chlorophyll Content
Authors: Ramandeep Kaur M. Malhi, Prashant K. Srivastava, G.Sandhya Kiran
Abstract:
Quantitative estimates of foliar biochemicals, namely chlorophyll content (CC), serve as key information for the assessment of plant productivity, stress, and the availability of nutrients. This also plays a critical role in predicting the dynamic response of any vegetation to changing climate conditions. The advent of hyperspectral data with an enhanced number of available wavelengths has increased the possibility of acquiring improved information on CC. Retrieval of CC is extensively carried out through well-known spectral indices derived from hyperspectral data. In the present study, an attempt is made to develop hyperspectral indices by identifying optimum bands for CC estimation in Butea monosperma (Lam.) Taub. growing in the forests of Shoolpaneshwar Wildlife Sanctuary, Narmada district, Gujarat State, India. 196 narrow bands of EO-1 Hyperion images were screened, and the best optimum wavelengths from the blue, green, red, and near-infrared (NIR) regions were identified based on the coefficient of determination (R²) between band reflectance and laboratory-estimated CC. The identified optimum wavelengths were then employed for developing 12 hyperspectral indices. These spectral index values and CC values were then correlated to investigate the relation between laboratory-measured CC and the spectral indices. Band 15 in the blue range, Band 22 in the green range, Band 40 in the red region, and Band 79 in the NIR region were found to be the optimum bands for estimating CC. The optimum-band-based combinations derived from the hyperspectral data proved to be the most effective indices for quantifying Butea CC, with NDVI and TVI identified as the best (R² > 0.7, p < 0.01). The study demonstrated the significance of band characterization in the development of the best hyperspectral indices for chlorophyll estimation, which can aid in monitoring the vitality of forests.
Keywords: band, characterization, chlorophyll, hyperspectral, indices
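A short sketch of computing NDVI from the identified red and NIR bands and correlating it with measured chlorophyll; all values below are synthetic.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
chlorophyll = rng.uniform(20, 60, size=30)                   # lab-measured CC (synthetic)
red = 0.25 - 0.002 * chlorophyll + rng.normal(0, 0.01, 30)   # red-band reflectance (e.g. Band 40)
nir = 0.45 + 0.001 * chlorophyll + rng.normal(0, 0.01, 30)   # NIR-band reflectance (e.g. Band 79)

ndvi = (nir - red) / (nir + red)
r, p = pearsonr(ndvi, chlorophyll)
print(f"R^2 = {r**2:.2f}, p = {p:.3g}")
```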
Procedia PDF Downloads 153
10736 Global-Scale Evaluation of Two Satellite-Based Passive Microwave Soil Moisture Data Sets (SMOS and AMSR-E) with Respect to Modelled Estimates
Authors: A. Alyaari, J. P. Wigneron, A. Ducharne, Y. Kerr, P. de Rosnay, R. de Jeu, A. Govind, A. Al Bitar, C. Albergel, J. Sabater, C. Moisy, P. Richaume, A. Mialon
Abstract:
Global Level-3 surface soil moisture (SSM) maps from the passive microwave Soil Moisture and Ocean Salinity (SMOS) satellite have been released. To further improve the Level-3 retrieval algorithm, evaluation of the accuracy of the spatio-temporal variability of the SMOS Level 3 products (referred to here as SMOSL3) is necessary. In this study, a comparative analysis of SMOSL3 with an SSM product derived from the observations of the Advanced Microwave Scanning Radiometer (AMSR-E), computed by implementing the Land Parameter Retrieval Model (LPRM) algorithm and referred to here as AMSRM, is presented. The comparison of both products (SMOSL3 and AMSRM) was made against SSM products produced by a numerical weather prediction system (SM-DAS-2) at ECMWF (European Centre for Medium-Range Weather Forecasts) for the 03/2010-09/2011 period at the global scale. The latter product was considered here a 'reference' product for the inter-comparison of the SMOSL3 and AMSRM products. Three statistical criteria were used for the evaluation: the correlation coefficient (R), the root-mean-squared difference (RMSD), and the bias. Global maps of these criteria were computed, taking into account vegetation information in terms of biome types and Leaf Area Index (LAI). We found that both the SMOSL3 and AMSRM products captured well the spatio-temporal variability of the SM-DAS-2 SSM products in most of the biomes. In general, the AMSRM products overestimated (i.e., wet bias) while the SMOSL3 products underestimated (i.e., dry bias) SSM in comparison to the SM-DAS-2 SSM products. In terms of correlation values, the SMOSL3 products were found to better capture the SSM temporal dynamics in highly vegetated biomes ('Tropical humid', 'Temperate Humid', etc.) while the best results for AMSRM were obtained over arid and semi-arid biomes ('Desert temperate', 'Desert tropical', etc.). When removing the seasonal cycles in the SSM time variations to compute anomaly values, better correlations with the SM-DAS-2 SSM anomalies were obtained with SMOSL3 than with AMSRM in most of the biomes, with the exception of desert regions. Finally, we showed that the accuracy of the remotely sensed SSM products is strongly related to LAI. Both the SMOSL3 and AMSRM (slightly better) SSM products correlate well with the SM-DAS-2 products over regions with sparse vegetation for values of LAI < 1 (these regions represent almost 50% of the pixels considered in this global study). In regions where LAI > 1, SMOSL3 outperformed AMSRM with respect to SM-DAS-2: SMOSL3 had almost consistent performances up to LAI = 6, whereas AMSRM performance deteriorated rapidly with increasing values of LAI.
Keywords: remote sensing, microwave, soil moisture, AMSR-E, SMOS
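The three evaluation criteria can be computed as in the sketch below; the two short time series are synthetic placeholders, not SMOS, AMSR-E, or SM-DAS-2 data.

```python
import numpy as np

def evaluate(retrieved, reference):
    """Return correlation coefficient (R), root-mean-squared difference (RMSD) and bias."""
    retrieved, reference = np.asarray(retrieved), np.asarray(reference)
    r = np.corrcoef(retrieved, reference)[0, 1]
    rmsd = np.sqrt(np.mean((retrieved - reference) ** 2))
    bias = np.mean(retrieved - reference)
    return r, rmsd, bias

reference = np.array([0.20, 0.25, 0.22, 0.30, 0.28])   # model-based SSM (m³/m³), synthetic
retrieval = np.array([0.17, 0.22, 0.20, 0.27, 0.24])   # a dry-biased satellite retrieval, synthetic
print(evaluate(retrieval, reference))
```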
Procedia PDF Downloads 356
10735 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius, volume, and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index still is a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult since appropriate regularization parameters have to be determined. Therefore, in a next stage of our work we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration. Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weak-absorbing particles with real parts 1.5 and 1.6 in all modes, the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
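A bare-bones illustration of truncated singular value decomposition as a regularization method for an ill-posed linear system; the PSD retrieval itself and the triple of regularization parameters are not reproduced here.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Solve Ax = b keeping only the k largest singular values (truncated SVD)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)   # zero out the small, noise-amplifying modes
    return Vt.T @ (s_inv * (U.T @ b))

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 40))
A[:, -1] = A[:, 0] + 1e-8 * rng.normal(size=40)   # a nearly dependent column makes A ill-conditioned
x_true = rng.normal(size=40)
b = A @ x_true + 1e-3 * rng.normal(size=40)        # "measurement" with a small error

for k in (20, 39, 40):
    err = np.linalg.norm(tsvd_solve(A, b, k) - x_true) / np.linalg.norm(x_true)
    print(k, f"{err:.3f}")   # keeping every mode lets the noise blow up; truncation stabilises the solution
```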
Procedia PDF Downloads 342
10734 Towards A New Maturity Model for Information System
Authors: Ossama Matrane
Abstract:
The information system has become a strategic lever for enterprises. It contributes effectively to aligning business processes with the strategies of enterprises. It is regarded as a source of increased productivity and effectiveness. So, many organizations are currently involved in implementing sustainable information systems. And, a large number of studies have been conducted over the last decade in order to define the success factors of information systems. Thus, many studies on maturity models have been carried out. Some of these studies refer to maturity models for information systems. In this article, we report on the development of a maturity model specifically designed for information systems. This model is built based on three components derived from the Maturity Model for Information Security Management, the OPM3 Project Management Maturity Model, and the processes of COBIT for IT governance. Thus, our proposed model defines three maturity stages for corporations to build a strong information system that supports the objectives of organizations. It provides a very practical structure with which to assess and improve information system implementation.
Keywords: information system, maturity models, information security management, OPM3, IT governance
Procedia PDF Downloads 446
10733 Morphological Processing of Punjabi Text for Sentiment Analysis of Farmer Suicides
Authors: Jaspreet Singh, Gurvinder Singh, Prabhsimran Singh, Rajinder Singh, Prithvipal Singh, Karanjeet Singh Kahlon, Ravinder Singh Sawhney
Abstract:
Morphological evaluation of Indian languages is one of the burgeoning fields in the area of Natural Language Processing (NLP). The evaluation of a language is an eminent task in the era of information retrieval and text mining. The extraction and classification of knowledge from text can be exploited for sentiment analysis and morphological evaluation. This study coalesces morphological evaluation and sentiment analysis for the task of classifying farmer suicide cases reported in the Punjab state of India. The pre-processing of Punjabi text involves morphological evaluation and normalization of Punjabi word tokens, followed by the training of the proposed model using deep learning classification on Punjabi-language text extracted from online Punjabi news reports. The class-wise accuracies of sentiment prediction for the four negatively oriented classes of farmer suicide cases are 93.85%, 88.53%, 83.3%, and 95.45%, respectively. The overall accuracy of sentiment classification obtained using the proposed framework on 275 Punjabi text documents is found to be 90.29%.
Keywords: deep neural network, farmer suicides, morphological processing, Punjabi text, sentiment analysis
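A generic sketch of the classification step using TF-IDF character n-grams and a small neural network; the romanized snippets and the two labels are placeholders, not the paper's corpus or its four classes.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Romanized placeholder snippets standing in for normalized Punjabi news text.
texts = ["karze de bojh karan kisan pareshan si", "fasal kharab hon karan nuksan hoya"]
labels = ["debt", "crop_failure"]   # two illustrative negative classes

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),   # character n-grams tolerate morphology
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(texts, labels)
print(model.predict(["karze da bojh"]))   # classify a new placeholder snippet
```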
Procedia PDF Downloads 325
10732 A Physical Theory of Information vs. a Mathematical Theory of Communication
Authors: Manouchehr Amiri
Abstract:
This article introduces a general notion of physical bit information that is compatible with the basics of quantum mechanics and incorporates the Shannon entropy as a special case. This notion of physical information leads to the Binary Data Matrix model (BDM), which predicts the basic results of quantum mechanics, general relativity, and black hole thermodynamics. The compatibility of the model with the holographic, information conservation, and Landauer's principles is investigated. After deriving the "Bit Information principle" as a consequence of the BDM, the fundamental equations of Planck, de Broglie, and Bekenstein, as well as the mass-energy equivalence, are derived.
Keywords: physical theory of information, binary data matrix model, Shannon information theory, bit information principle
Procedia PDF Downloads 170
10731 The Effect of Supply Chain Integration on Information Sharing
Authors: Khlif Hamadi
Abstract:
Supply chain integration has become a potentially valuable way of securing shared information and improving supply chain performance since competition is no longer between organizations but among supply chains. This research conceptualizes and develops three dimensions of supply chain integration (integration with customers, integration with suppliers, and inter-organizational integration) and tests the relationships between supply chain integration, information sharing, and supply chain performance. Furthermore, it considers four types of information sharing, namely information sharing with customers, information sharing with suppliers, inter-functional information sharing, and intra-organizational information sharing; and four constructs of supply chain performance, representing expenses and costs, asset utilization, supply chain reliability, and supply chain flexibility and responsiveness. The theoretical and practical implications of the study, as well as directions for future research, are discussed.
Keywords: supply chain integration, supply chain management, information sharing, supply chain performance
Procedia PDF Downloads 259
10730 Gastric Foreign Bodies in Dogs
Authors: Naglaa A. Abd Elkader, Haithem A. Farghali
Abstract:
The present study was carried out on fifteen clinical cases of dogs of different breeds which were admitted to the surgical clinic of veterinary medicine with different symptoms (acute vomiting, hematemesis, and anorexia). The diagnostic approach included plain radiography and endoscopic examination. Treatment included surgical intervention and endoscopic retrieval, followed by medicinal treatment. This study aimed at the detection of different foreign bodies by the most suitable method according to the type of foreign body.
Keywords: stomach, endoscopy, foreign bodies, dogs
Procedia PDF Downloads 415