Search results for: incidental information processing
13264 Biofilm Text Classifiers Developed Using Natural Language Processing and Unsupervised Learning Approach
Authors: Kanika Gupta, Ashok Kumar
Abstract:
Biofilms are dense, highly hydrated cell clusters that are irreversibly attached to a substratum, to an interface or to each other, and are embedded in a self-produced gelatinous matrix composed of extracellular polymeric substances. Research in the biofilm field has become very significant, as biofilms show high mechanical resilience and resistance to antibiotic treatment and constitute a significant problem both in healthcare and in other industries involving microorganisms. The information, both stated and hidden, in the biofilm literature is growing exponentially; it is therefore not possible for researchers and practitioners to extract and relate information from the different written resources without automated support. The current work proposes and discusses the use of text mining techniques for the extraction of information from a biofilm literature corpus containing 34,306 documents. It is very difficult and expensive to obtain annotated material for biomedical literature because the literature is unstructured, i.e., free text. Therefore, we adopted an unsupervised approach, in which no annotated training data are necessary, and used it to develop a system that classifies the text on the basis of growth and development, drug effects, radiation effects, classification, and physiology of biofilms. A two-step structure was used: the first step extracts keywords from the biofilm literature using a metathesaurus and standard natural language processing tools such as RapidMiner v5.3, and the second step discovers relations between the genes extracted from the whole set of biofilm literature using pubmed.mineR v1.0.11. The unsupervised approach, the machine learning task of inferring a function that describes hidden structure from unlabeled data, was applied to the extracted datasets to develop classifiers using WinPython-64bit v3.5.4.0Qt5 and RStudio v0.99.467, which automatically classify the text into the mentioned sets. The developed classifiers were tested on a large dataset of biofilm literature, which showed that the proposed unsupervised approach is promising and well suited for semi-automatic labeling of the extracted relations. All of the information was stored in a relational database hosted locally on the server. The generated biofilm vocabulary and gene relations will be valuable for researchers dealing with biofilm research, making their searches easy and efficient, as the keywords and genes can be mapped directly to the documents used for database development.
Keywords: biofilms literature, classifiers development, text mining, unsupervised learning approach, unstructured data, relational database
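The pipeline described above can be illustrated with a short, hypothetical sketch. The study itself used RapidMiner, pubmed.mineR, R and WinPython; the scikit-learn code below is only an analogue of the general idea (weighted keyword features followed by unsupervised clustering into the five topic groups named in the abstract), and the example documents are invented.

```python
# Illustrative analogue only: the study used RapidMiner, pubmed.mineR and R;
# this scikit-learn sketch shows the general unsupervised pipeline (keyword
# features, then clustering into the five topic groups named above).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented stand-in corpus; the study processed 34,306 biofilm documents.
documents = [
    "Biofilm growth and development on medical implant surfaces",
    "Drug effects of ciprofloxacin on Pseudomonas aeruginosa biofilms",
    "UV radiation effects on the extracellular polymeric matrix",
    "Classification of multispecies biofilms from clinical isolates",
    "Physiology of nutrient transport inside mature biofilms",
    "Gene expression during biofilm growth under antibiotic stress",
]

# Step 1: weighted keyword features (a metathesaurus lookup could replace
# the plain bag-of-words vocabulary used here).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

# Step 2: cluster without labels; five clusters mirror the five target groups
# (growth/development, drug effects, radiation effects, classification, physiology).
model = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = model.fit_predict(X)

# The top terms of each cluster suggest a semi-automatic label for that group.
terms = vectorizer.get_feature_names_out()
for k, centroid in enumerate(model.cluster_centers_):
    print(f"cluster {k}:", [terms[i] for i in centroid.argsort()[::-1][:5]])
```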
Procedia PDF Downloads 170
13263 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mis-transcribed during transcription were recorded and then replaced throughout all other documents. To remove banter and side comments, the transcripts were split into paragraphs (separated by a change in speaker), and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (BART) summarization model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data were further processed through light, manual observation – any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the document. This narrowed the data down to about 5 hours’ worth of processing. The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 required to satisfy the grant. The findings of the study and the subsequent curation of this methodology also raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general); if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to that trade-off. The data are never altered beyond grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data are without compromising their purity. Secondly, the data are chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data are harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
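A minimal sketch of the filtering steps described above is given below. The publicly available facebook/bart-large-cnn checkpoint is assumed as the BART summarizer and a crude capitalization heuristic stands in for the keyword/proper-noun extractor; both are stand-ins, not the project's exact tooling.

```python
# Minimal sketch, with assumptions: "facebook/bart-large-cnn" stands in for the
# BART summarizer and a capitalization heuristic stands in for the keyword /
# proper-noun extractor used in the project.
import re
from transformers import pipeline

def split_paragraphs(transcript, min_chars=300):
    """Split on speaker changes (lines like 'NAME:') and drop short banter."""
    parts = re.split(r"\n(?=[A-Z][A-Za-z .]+:)", transcript)
    return [p.strip() for p in parts if len(p.strip()) >= min_chars]

def proper_nouns(text):
    """Crude stand-in for a keyword extractor: capitalized in-text tokens."""
    return set(re.findall(r"\b[A-Z][a-z]+\b", text))

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_relevant(paragraphs, interview_nouns):
    """Summarize only paragraphs that mention a proper noun from the interview."""
    selected = []
    for p in paragraphs:
        if proper_nouns(p) & interview_nouns:
            summary = summarizer(p, max_length=60, min_length=15)[0]["summary_text"]
            selected.append((p, summary))   # keep the raw paragraph alongside it
    return selected
```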
Procedia PDF Downloads 29
13262 Increasing a Computer Performance by Overclocking Central Processing Unit (CPU)
Authors: Witthaya Mekhum, Wutthikorn Malikong
Abstract:
The objective of this study is to investigate the increase in desktop computer performance after overclocking the central processing unit (CPU), i.e., running the component at a higher clock rate (more clock cycles per second) than it was designed for, in steps of 0.1 GHz (100 MHz) per level over the range 4.0 GHz to 4.5 GHz. The computer performance was tested at each level with four programs: Hyper PI ver. 0.99b, Cinebench R15, LinX ver. 0.6.4, and WinRAR. After the CPU overclock, the computer performance increased. When overclocking the CPU by 29%, the performance tested with Hyper PI ver. 0.99b increased by 10.03%, the performance tested with Cinebench R15 increased by 20.05%, and the performance tested with LinX increased by 16.61%. However, the performance increased by only 8.14% when tested with WinRAR. The computer performance did not scale directly with the overclock rate because overall performance also depends on many other components, such as the random access memory (RAM), hard disk drive, motherboard, and display card.
Keywords: overclock, performance, central processing unit, computer
Procedia PDF Downloads 283
13261 The Role of Online Videos in Undergraduate Casual-Leisure Information Behaviors
Authors: Nei-Ching Yeh
Abstract:
This study describes undergraduate casual-leisure information behaviors relevant to online videos. Diaries and in-depth interviews were used to collect data. Twenty-four undergraduates participated in this study (9 men, 15 women; all were aged 18–22 years). This study presents a model of casual-leisure information behaviors and contributes new insights into user experience in casual-leisure settings, such as online video programs, with implications for other information domains.
Keywords: casual-leisure information behaviors, information behavior, online videos, role
Procedia PDF Downloads 309
13260 Development of a Vacuum System for Orthopedic Drilling Processes and Determination of Optimal Processing Parameters for Temperature Control
Authors: Kadir Gök
Abstract:
In this study, a vacuum system was developed for orthopedic drilling processes, and the most efficient processing parameters were determined using statistical analysis of the temperature rise. A reverse engineering technique was used to obtain a 3D model of the chip vacuum system, and the obtained point cloud data were transferred to SolidWorks software in STL format. An experimental design was applied by selecting different parameters and their levels, such as RPM, feed rate, and drill bit diameter, and the most efficient processing parameters with respect to temperature rise were determined using ANOVA. Additionally, the bone chip-vacuum device was developed and successfully collected all chips and fragments during the bone drilling tests, and the chip-collecting device proved useful in removing heat from the drilling zone. The effects of the processing parameters on the temperature levels during chip vacuuming were determined, and it was found that the collected bone chips and fragments can be used as autograft and allograft material for tissue engineering. Overall, this study provides significant insights into the development of a vacuum system for orthopedic drilling processes and the use of bone chips and fragments in tissue engineering applications.
Keywords: vacuum system, orthopedic drilling, temperature rise, bone chips
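As a rough illustration of the factorial analysis described above, the sketch below runs a main-effects ANOVA with statsmodels on an invented table of drilling trials; the factor levels and temperature readings are hypothetical, since the abstract does not report them.

```python
# Hypothetical data: the abstract does not report the factor levels or the
# temperature readings, so the values below are invented for illustration.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

trials = pd.DataFrame({
    "rpm":       [400, 400, 800, 800, 1200, 1200, 400, 800, 1200, 1200, 800, 400],
    "feed_rate": [50, 75, 50, 75, 50, 75, 75, 50, 75, 50, 75, 50],                  # mm/min
    "diameter":  [2.5, 3.2, 2.5, 3.2, 2.5, 3.2, 2.5, 3.2, 2.5, 3.2, 2.5, 3.2],      # mm
    "temp_rise": [8.1, 9.4, 11.2, 13.0, 15.8, 17.5, 9.0, 12.1, 16.9, 15.1, 12.8, 8.5],  # deg C
})

# Main-effects ANOVA: which factor explains most of the temperature rise?
model = ols("temp_rise ~ C(rpm) + C(feed_rate) + C(diameter)", data=trials).fit()
print(sm.stats.anova_lm(model, typ=2))
```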
Procedia PDF Downloads 98
13259 Netnography Research in Leisure, Tourism, and Hospitality: Lessons from Research and Education
Authors: Marisa P. De Brito
Abstract:
The internet is affecting the way the industry operates and communicates. It is also becoming a customary means for leisure, tourism, and hospitality consumers to seek and exchange information and views on hotels, destinations, events and attractions, or to develop social ties with other users. On the one hand, the internet is a rich field in which to conduct leisure, tourism, and hospitality research; on the other hand, however, few researchers formally embrace online methods of research, such as netnography. Within the social sciences, netnography falls under the interpretative/ethnographic research methods umbrella. It is an adaptation of anthropological techniques such as participant and non-participant observation, used to study online interactions happening on social media platforms such as Facebook. It is, therefore, a research method applied to the study of online communities, the term itself being a contraction of the words network (as in internet) and ethnography. It was developed in the context of marketing research in the nineties, and in the last twenty years it has spread to other contexts such as education, psychology, and urban studies. Since netnography is not universally known, researchers and educators may be discouraged from using it. This work offers guidelines for researchers wanting to apply this method in the field of leisure, tourism, and hospitality and for educators wanting to teach about it. This is done by means of a double approach: a content analysis of the literature side-by-side with educational data on the use of netnography. The content analysis covers the incidental research using netnography in leisure, tourism, and hospitality in the last twenty years. The educational data are the author's and her colleagues’ experience in coaching students throughout the process of writing a paper using primary netnographic data - from identifying the phenomenon to be studied, selecting an online community, and collecting and analyzing data, to writing up their findings. In the end, this work puts forward, on the one hand, a research agenda and, on the other hand, an educational roadmap for those wanting to apply netnography in the field or the classroom. The educator’s roadmap summarises what can be expected from mini-netnographies conducted by students and how to set them up. The research agenda highlights the issues and research questions for which the method is most suitable, the most common bottlenecks and drawbacks of the method and its application, and also where the greatest knowledge opportunities lie.
Keywords: netnography, online research, research agenda, educator's roadmap
Procedia PDF Downloads 184
13258 Interoperable Design Coordination Method for Sharing Communication Information Using Building Information Model Collaboration Format
Authors: Jin Gang Lee, Hyun-Soo Lee, Moonseo Park
Abstract:
The utilization of BIM and IFC allows project participants to collaborate across different areas by consistently sharing interoperable product information represented in a model. Comments or markups generated during the coordination process can be categorized as communication information, which tends to be shared in a less standardized manner. Such information can be difficult to manage and reuse compared to the product information in a model. The present study proposes an interoperable coordination method using BCF (the BIM Collaboration Format) for managing and sharing communication information during the BIM-based coordination process. A management function for coordination in the BIM collaboration system is developed to assess its ability to share communication information in BIM collaboration projects. This approach systematically links communication information generated during the coordination process to the building model and serves as a type of storage system for retrieving knowledge created during BIM collaboration projects.
Keywords: design coordination, building information model, BIM collaboration format, industry foundation classes
Procedia PDF Downloads 433
13257 Enhancing Food Quality and Safety Management in Ethiopia's Food Processing Industry: Challenges, Causes, and Solutions
Authors: Tuji Jemal Ahmed
Abstract:
Food quality and safety challenges are prevalent in Ethiopia's food processing industry, and they can have adverse effects on consumers' health and wellbeing. The country is known for its diverse range of agricultural products, which are essential to its economy. However, poor food quality and safety policies and management systems in the food processing industry have led to several health problems, foodborne illnesses, and economic losses. This paper aims to highlight the causes and effects of food safety and quality issues in the food processing industry of Ethiopia and to discuss potential solutions to address these issues. One of the main causes of poor food quality and safety in Ethiopia's food processing industry is the lack of adequate regulations and enforcement mechanisms. The absence of comprehensive food safety and quality policies and guidelines has led to substandard practices in the food manufacturing process. Moreover, the lack of monitoring and enforcement of existing regulations has created a conducive environment for unscrupulous businesses to engage in unsafe practices that endanger the public's health. The effects of poor food quality and safety are significant, ranging from the loss of human lives to increased healthcare costs and the loss of consumer confidence in the food processing industry. Foodborne illnesses, such as diarrhea, typhoid fever, and cholera, are prevalent in Ethiopia, and poor food quality and safety practices contribute significantly to their prevalence. Additionally, food recalls due to contamination or mislabeling often result in significant economic losses for businesses in the food processing industry. To address these challenges, the Ethiopian government has begun to take steps to improve food quality and safety in the food processing industry. One of the most notable initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to regulate and monitor the quality and safety of food and drug products in the country. The EFDA has implemented several measures to enhance food safety, such as conducting routine inspections, monitoring the importation of food products, and enforcing strict labeling requirements. Another potential solution to improve food quality and safety in Ethiopia's food processing industry is the implementation of food safety management systems (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety hazards throughout the food manufacturing process. Implementing an FSMS can help businesses in the food processing industry identify and address potential hazards before they cause harm to consumers. Additionally, the implementation of an FSMS can help businesses comply with existing food safety regulations and guidelines. In conclusion, improving food quality and safety policies and management systems in Ethiopia's food processing industry is critical to protecting public health and enhancing the country's economy. Addressing the root causes of poor food quality and safety and implementing effective solutions, such as the establishment of regulatory agencies and the implementation of food safety management systems, can help to improve the overall safety and quality of the country's food supply.
Keywords: food quality, food safety, policy, management system, food processing industry
Procedia PDF Downloads 85
13256 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention and has become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not yet been well solved. Furthermore, this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Because image capturing conditions vary, the algorithms need to consider a variety of possible environmental factors, as the colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce the appropriate result due to variation in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show promising results for the proposed approach in comparison with existing methods.
Keywords: image processing, illumination equalization, shadow filtering, object detection
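A hedged OpenCV sketch of the named stages (YCrCb conversion, illumination equalization, shadow-insensitive thresholding, erosion/dilation, contour extraction) is shown below; the exact filters and parameters used for the wood-stack data set are not specified in the abstract, so the values here are illustrative.

```python
# Illustrative parameters; the filters actually tuned for the wood-stack
# data set are not given in the abstract.
import cv2

def detect_objects(path, min_area=500.0):
    bgr = cv2.imread(path)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # Equalize the luma channel to reduce uneven illumination; the chroma
    # channels (Cr, Cb) are left untouched.
    y = cv2.equalizeHist(y)

    # Threshold on chroma so that shadows (mostly a luma change) are suppressed;
    # Otsu chooses the threshold instead of a fixed, hand-tuned value.
    _, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening/closing (erosion + dilation) removes speckle
    # and fills small gaps inside the object regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]
```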
Procedia PDF Downloads 216
13255 Sarcasm Recognition System Using Hybrid Tone-Word Spotting Audio Mining Technique
Authors: Sandhya Baskaran, Hari Kumar Nagabushanam
Abstract:
Sarcasm sentiment recognition is an area of natural language processing that has been probed into in recent times. Even with the advancements in NLP, typical translations of words and sentences in context fail to provide exact information on the sentiment or emotion of a user. For example, if something bad happens, the statement ‘That's just what I need, great! Terrific!’ is expressed in a sarcastic tone, which could be misread as a positive sign by any text-based analyzer. In this paper, we present a unique real-time ‘word with its tone’ spotting technique that provides sentiment analysis for the tone or pitch of a voice in combination with the words being expressed. This hybrid approach increases the probability of identifying a special sentiment like sarcasm much closer to the real world than mining text or speech individually. The system uses a tone analyzer such as YIN-FFT, which extracts pitch segment-wise and is used in parallel with a speech recognition system. The clustered data are classified for sentiments, and a sarcasm score is determined for each. Our simulations demonstrate an improvement in F-measure of around 12% compared to existing detection techniques, with increased precision and recall.
Keywords: sarcasm recognition, tone-word spotting, natural language processing, pitch analyzer
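The 'word with its tone' idea can be sketched as below, pairing a YIN pitch contour (via librosa) with a naive positive-word score for the aligned transcript; the word list, thresholds, and decision rule are hypothetical illustrations, not the classifier evaluated in the paper.

```python
# Toy illustration: the positive-word list, thresholds and decision rule are
# invented; only the pitch extraction (YIN) reflects the abstract.
import librosa
import numpy as np

POSITIVE_WORDS = {"great", "terrific", "wonderful", "perfect", "love", "fantastic"}

def pitch_features(wav_path):
    y, sr = librosa.load(wav_path, sr=None)
    f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)     # frame-wise pitch in Hz
    f0 = f0[np.isfinite(f0)]
    return float(np.mean(f0)), float(np.ptp(f0))      # mean pitch and pitch range

def positive_word_score(transcript):
    words = transcript.lower().split()
    return sum(w.strip(".,!?'") in POSITIVE_WORDS for w in words) / max(len(words), 1)

def sarcasm_candidate(wav_path, transcript):
    _, pitch_range = pitch_features(wav_path)
    # Strongly positive wording delivered with an exaggerated pitch swing is
    # flagged for the downstream sarcasm classifier.
    return positive_word_score(transcript) > 0.15 and pitch_range > 120.0
```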
Procedia PDF Downloads 293
13254 The Effect of Information Technologies on Business Performance: An Application on Small Hotels
Authors: Abdullah Karaman, Kursad Sayin
Abstract:
This research identifies which information technologies are used in small hotel businesses and examines the managers' perceptions of the relationship between information technologies and performance. During the research, a questionnaire was prepared, the small-scale hotel managers were interviewed face to face and filled out the questionnaire, and the answers obtained were evaluated. As a result of the research, it was found that the managers do not pay much attention to the use of information technologies in practice, even though they accept that information technologies are important in terms of performance.
Keywords: information technologies, managers, performance, small hotels
Procedia PDF Downloads 489
13253 Results of EPR Dosimetry Study of Population Residing in the Vicinity of the Uranium Mines and Uranium Processing Plant
Authors: K. Zhumadilov, P. Kazymbet, A. Ivannikov, M. Bakhtin, A. Akylbekov, K. Kadyrzhanov, A. Morzabayev, M. Hoshi
Abstract:
The aim of the study is to evaluate the possible excess dose received by uranium processing plant workers. The possible excess dose of the workers was evaluated by comparison with a population pool (Stepnogorsk) and a control pool (Astana city). The teeth samples measured had been extracted according to medical indications. In total, twenty-seven tooth enamel samples from residents of Stepnogorsk city (180 km from Astana, Kazakhstan) were analyzed. About six tooth samples were collected from workers of the uranium processing plant. The results of the tooth enamel dose estimation show only a small influence of working conditions on the workers; the maximum excess dose is less than 100 mGy. This is a pilot study of EPR dose estimation, and additional samples are required for a final conclusion.
Keywords: EPR dose, workers, uranium mines, tooth samples
Procedia PDF Downloads 411
13252 Selecting Answers for Questions with Multiple Answer Choices in Arabic Question Answering Based on Textual Entailment Recognition
Authors: Anes Enakoa, Yawei Liang
Abstract:
Question answering (QA) is one of the most important and demanding tasks in the field of natural language processing (NLP). In QA systems, the answer generation task produces a list of candidate answers to the user's question, of which only one is correct. Answer selection is one of the main components of QA and is concerned with selecting the best answer choice from the candidate answers suggested by the system. However, the selection process can be very challenging, especially in Arabic, due to the particularities of the language. To address this challenge, an approach is proposed for answering questions with multiple answer choices in Arabic QA systems based on textual entailment (TE) recognition. The developed approach employs a support vector machine that considers lexical, semantic, and syntactic features in order to recognize the entailment between the generated hypotheses (H) and the text (T). A set of experiments was conducted for performance evaluation, and the overall performance of the proposed method reached an accuracy of 67.5% with a C@1 score of 80.46%. The obtained results are promising and demonstrate that the proposed method is effective for the TE recognition task.
Keywords: information retrieval, machine learning, natural language processing, question answering, textual entailment
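A minimal sketch of the answer-selection step follows: an SVM scores each (text, hypothesis) pair and the highest-scoring candidate is returned. The overlap features and toy training pairs are simplified stand-ins for the lexical, semantic, and syntactic features described in the abstract.

```python
# Toy features and training pairs; the real system uses richer lexical,
# semantic and syntactic features extracted from Arabic text.
import numpy as np
from sklearn.svm import SVC

def pair_features(text, hypothesis):
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    overlap = len(t & h) / max(len(h), 1)      # lexical coverage of H by T
    length_ratio = len(h) / max(len(t), 1)
    return [overlap, length_ratio]

# Hypothetical pairs labelled 1 (T entails H) or 0 (no entailment).
train_pairs = [("the capital of france is paris", "paris is the capital of france", 1),
               ("the capital of france is paris", "lyon is the capital of france", 0)]
X = np.array([pair_features(t, h) for t, h, _ in train_pairs])
y = np.array([label for _, _, label in train_pairs])
clf = SVC(kernel="rbf").fit(X, y)

def select_answer(supporting_text, candidates):
    """Pick the candidate hypothesis most strongly entailed by the text."""
    scores = [clf.decision_function([pair_features(supporting_text, c)])[0]
              for c in candidates]
    return candidates[int(np.argmax(scores))]
```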
Procedia PDF Downloads 145
13251 Image Processing on Geosynthetic Reinforced Layers to Evaluate Shear Strength and Variations of the Strain Profiles
Authors: S. K. Khosrowshahi, E. Güler
Abstract:
This study investigates the reinforcement function of geosynthetics on the shear strength and strain profile of sand. A series of simple shear tests was conducted, and the shearing behavior of the samples under static and cyclic loads was evaluated. Three different types of geosynthetics, including geotextile and geonets, were used as the reinforcement materials. An image processing analysis based on the optical flow method was performed to measure the lateral displacements and estimate the shear strains. It is shown that, besides improving the shear strength, the geosynthetic reinforcement leads to a remarkable reduction in the shear strains. The improved layer reduces the thickness of the soil layer required to resist shear stresses. Consequently, geosynthetic reinforcement can be considered a proper approach for sustainable designs, especially in projects with a large amount of geotechnical work, such as the subgrades of pavements, roadways, and railways.
Keywords: image processing, soil reinforcement, geosynthetics, simple shear test, shear strain profile
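The strain-estimation idea can be sketched as follows, with Farneback dense optical flow standing in for whatever optical-flow variant the authors implemented; the strain measure (the gradient of the averaged lateral displacement with depth) is the usual simple-shear definition, and the pixel scale is assumed.

```python
# Farneback flow and the pixel scale below are assumptions; only the overall
# idea (track lateral displacement, differentiate with depth) follows the text.
import cv2
import numpy as np

def shear_strain_profile(img_before, img_after, px_per_mm=10.0):
    a = cv2.imread(img_before, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(img_after, cv2.IMREAD_GRAYSCALE)

    # Dense optical flow between the two frames (pyr_scale, levels, winsize,
    # iterations, poly_n, poly_sigma, flags).
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    u = flow[..., 0]                       # horizontal displacement per pixel

    # Average lateral displacement over each image row, then differentiate
    # with respect to depth to obtain the shear strain at every height.
    u_row = u.mean(axis=1) / px_per_mm     # mm of lateral movement per row
    depth = np.arange(u_row.size) / px_per_mm
    gamma = np.gradient(u_row, depth)      # shear strain (dimensionless)
    return depth, gamma
```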
Procedia PDF Downloads 220
13250 Hybrid Algorithm for Non-Negative Matrix Factorization Based on Symmetric Kullback-Leibler Divergence for Signal Dependent Noise: A Case Study
Authors: Ana Serafimovic, Karthik Devarajan
Abstract:
Non-negative matrix factorization approximates a high-dimensional non-negative matrix V as the product of two non-negative matrices, W and H, and allows only additive linear combinations of data, enabling it to learn parts-based representations of the data. It has been successfully applied in the analysis and interpretation of high-dimensional data arising in neuroscience, computational biology, and natural language processing, to name a few. The objective of this paper is to assess a hybrid algorithm for non-negative matrix factorization with multiplicative updates. The method aims to minimize the symmetric version of the Kullback-Leibler divergence, known as intrinsic information, and assumes that the noise is signal-dependent and originates from an arbitrary distribution from the exponential family. It is a generalization of currently available algorithms for Gaussian, Poisson, gamma, and inverse Gaussian noise. We demonstrate the potential usefulness of the new generalized algorithm by comparing its performance to baseline methods that also aim to minimize symmetric divergence measures.
Keywords: non-negative matrix factorization, dimension reduction, clustering, intrinsic information, symmetric information divergence, signal-dependent noise, exponential family, generalized Kullback-Leibler divergence, dual divergence
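For orientation, the snippet below implements the classical Lee-Seung multiplicative updates that minimize the one-sided generalized Kullback-Leibler divergence D(V || WH); the paper's hybrid algorithm targets the symmetric divergence and a signal-dependent exponential-family noise model, which this baseline sketch does not reproduce.

```python
# Baseline only: classical multiplicative updates for the one-sided
# generalized KL divergence, not the paper's symmetric-divergence algorithm.
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    ones = np.ones_like(V)
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.T @ ones + eps)   # multiplicative update of H
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (ones @ H.T + eps)   # multiplicative update of W
    return W, H

def generalized_kl(V, W, H, eps=1e-10):
    WH = W @ H + eps
    return float(np.sum(V * np.log((V + eps) / WH) - V + WH))
```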
Procedia PDF Downloads 246
13249 The Effect of Irgafos 168 in the Thermostabilization of High Density Polyethylene
Authors: Mahdi Almaky
Abstract:
The thermostabilization of high-density polyethylene (HDPE) is realized through the action of primary antioxidants, such as phenolic antioxidants, and secondary antioxidants, such as aryl phosphites. The efficiency of two secondary antioxidants, commercially named Irgafos 168 and Weston 399, was investigated using different physical, mechanical, spectroscopic, and calorimetric methods. The effects of both antioxidants on the processing stability and long-term stability of HDPE produced at the Ras Lanuf oil and gas processing company were measured and compared. The combination of Irgafos 168 with Irganox 1010, used at a smaller concentration, results in a synergistic effect against thermo-oxidation and protects better against colour change at processing temperature and during long-term oxidation than the combination of Weston 399 with Irganox 1010.
Keywords: thermostabilization, high density polyethylene, primary antioxidant, phenolic antioxidant, Irgafos 168, Irganox 1010, Weston 399
Procedia PDF Downloads 354
13248 Printed Thai Character Recognition Using Particle Swarm Optimization Algorithm
Authors: Phawin Sangsuvan, Chutimet Srinilta
Abstract:
This paper presents the application of the particle swarm optimization (PSO) method to Thai optical character recognition (OCR). OCR consists of pre-processing, character recognition, and post-processing stages; before entering the recognition process, each character must be prepared by the pre-processing stage. PSO is an optimization method that belongs to the swarm intelligence family and is based on the imitation of social behavior patterns of animals. The route of each particle is determined by its own data and the data of neighboring particles. This interaction with neighbors is the advantage of particle swarms in determining the best solution, so PSO has attracted many researchers working on difficult problems, including character recognition. As in previous work, this research used a projection histogram to extract printed digit features and defined a simple fitness function for PSO. The results reveal that PSO achieves 67.73% accuracy on the test dataset. Future work will explore improving the fitness function to enhance the performance of PSO.
Keywords: character recognition, histogram projection, particle swarm optimization, pattern recognition techniques
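A generic particle-swarm sketch is given below: particles follow the usual inertia/cognitive/social velocity update and are scored by a fitness function; the projection-histogram distance used here is a simplified stand-in for the fitness function defined in the paper.

```python
# Generic PSO; the projection-histogram fitness below is a simplified
# stand-in for the fitness function defined in the paper.
import numpy as np

def projection_histogram(glyph):
    """Row- and column-wise ink counts of a binary character image."""
    return np.concatenate([glyph.sum(axis=0), glyph.sum(axis=1)]).astype(float)

def pso(fitness, dim, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))       # positions
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Example: tune per-bin weights so a candidate glyph matches a template glyph.
template = projection_histogram(np.eye(16, dtype=int))
candidate = projection_histogram(np.fliplr(np.eye(16, dtype=int)))
best_w, best_fit = pso(lambda w_vec: np.linalg.norm(w_vec * candidate - template),
                       dim=template.size)
```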
Procedia PDF Downloads 477
13246 Exploiting JPEG2000 into Reversible Information
Authors: Te-Jen Chang, I-Hui Pan, Kuang-Hsiung Tan, Shan-Jen Cheng, Chien-Wu Lan, Chih-Chan Hu
Abstract:
With the advent of the multimedia age, information hiding technologies have been proposed to protect data from being tampered with, damaged, or faked. Information hiding means that important secret information is hidden inside cover multimedia, producing camouflaged media. This camouflaged media has the characteristic of natural protection: because it arouses no suspicion, the important secret information can be transmitted safely. A high-capacity reversible information hiding technique is proposed in this paper. Gray-level images serve as the cover media. We compress the gray image and compare it with the original image to produce estimated differences; by applying expansion-based information hiding to these estimated differences, a higher information capacity can be achieved. The experimental results validate the proposed technique: in these experiments, both the information payload capacity and the image quality requirements are satisfied.
Keywords: cover media, camouflaged media, reversible information hiding, gray image
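The expansion idea behind such reversible schemes can be illustrated with the classic difference-expansion step on a pixel pair, shown below; overflow handling and the compression-based difference estimation described in the abstract are omitted, so this is a sketch of the underlying principle rather than the proposed method itself.

```python
# Classic difference expansion on one pixel pair: the secret bit rides in the
# expanded difference, and both original pixel values are restored exactly.
# Overflow checks and the compression-based estimation step are omitted.
def embed_bit(x, y, bit):
    l = (x + y) // 2            # integer average (kept recoverable)
    h = x - y                   # difference to be expanded
    h2 = 2 * h + bit            # expanded difference now carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract_bit(x2, y2):
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 % 2
    h = h2 // 2                 # original difference recovered
    return (l + (h + 1) // 2, l - h // 2), bit

stego = embed_bit(205, 200, 1)              # -> (208, 197)
restored, bit = extract_bit(*stego)
assert restored == (205, 200) and bit == 1  # fully reversible
```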
Procedia PDF Downloads 327
13245 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by Machine Learning, Precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of machine learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, combined with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is drawn from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed in the high accuracy of the diagnosis. Delivered as a multiple-condition diagnostic, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to the better design of clinical trials and help speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
Procedia PDF Downloads 70
13244 Freedom of Information and Freedom of Expression
Authors: Amin Pashaye Amiri
Abstract:
Freedom of information, according to which the public has a right to have access to government-held information, is largely considered as a tool for improving transparency and accountability in governments, and as a requirement of self-governance and good governance. So far, more than ninety countries have recognized citizens’ right to have access to public information. This recognition often took place through the adoption of an act referred to as “freedom of information act”, “access to public records act”, and so on. A freedom of information act typically imposes a positive obligation on a government to initially and regularly release certain public information, and also obliges it to provide individuals with information they request. Such an act usually allows governmental bodies to withhold information only when it falls within a limited number of exemptions enumerated in the act such as exemptions for protecting privacy of individuals and protecting national security. Some steps have been taken at the national and international level towards the recognition of freedom of information as a human right. Freedom of information was recognized in a few countries as a part of freedom of expression, and therefore, as a human right. Freedom of information was also recognized by some international bodies as a human right. The Inter-American Court of Human Rights ruled in 2006 that Article 13 of the American Convention on Human Rights, which concerns the human right to freedom of expression, protects the right of all people to request access to government information. The European Court of Human Rights has recently taken a considerable step towards recognizing freedom of information as a human right. However, in spite of the measures that have been taken, public access to government information is not yet widely accepted as an international human right. The paper will consider the degree to which freedom of information has been recognized as a human right, and study the possibility of widespread recognition of such a human right in the future. It will also examine the possible benefits of such recognition for the development of the human right to free expression.
Keywords: freedom of information, freedom of expression, human rights, government information
Procedia PDF Downloads 548
13243 The Quality of Accounting Information of Private Companies in the Czech Republic
Authors: Kateřina Struhařová
Abstract:
The paper gives evidence on the quality of accounting information of Czech private companies. In general, private companies in the Czech Republic do not see the benefits of providing accounting information of high quality. Based on research into the financial statements of entrepreneurs and companies in the Zlin region, it was confirmed that the quality of accounting information differs among private entities and that the major influences on accounting information quality are whether the financial statements are audited and the size of the entity. Foreign shareholders and lenders also have some impact on accounting information quality.
Keywords: accounting information quality, financial statements, Czech Republic, private companies
Procedia PDF Downloads 304
13242 Parallel Vector Processing Using Multi Level Orbital DATA
Authors: Nagi Mekhiel
Abstract:
Many applications use vector operations by applying a single instruction to multiple data items that map to different locations in conventional memory. Transferring data from memory is limited by access latency and bandwidth, which affects the performance gain of vector processing. We present a memory system that makes all of its content available to processors in time, so that the processors need not access the memory: each location is forced to become available to all processors at a specific time. The data move in different orbits to become available to other processors in higher orbits at different times. We use this memory to apply parallel vector operations to data streams at the first orbit level. Data processed in the first level move to the upper orbit one data element at a time, allowing a processor in that orbit to apply another vector operation that deals with the serial-code limitations inherent in all parallel applications, interleaving it with lower-level vector operations.
Keywords: Memory Organization, Parallel Processors, Serial Code, Vector Processing
Procedia PDF Downloads 270
13241 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction
Authors: Zhengrong Wu, Haibo Yang
Abstract:
In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which rely heavily on grammatical rules and prior knowledge, demonstrate suboptimal performance in processing complex, multi-source disaster information. This study, drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, constructs question-answer templates based on large language models. Using the P-Tuning method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. This serves as knowledge base support for disaster emergency response.
Keywords: large language model, knowledge graph, disaster, deep learning
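The graph-construction half of such a pipeline can be sketched as below; the fine-tuned ChatGLM2-6B model is replaced by a placeholder function, and the question templates, triples, and event names are invented for illustration.

```python
# Placeholder pipeline: ask_llm stands in for the fine-tuned ChatGLM2-6B model;
# the templates, triples and event names are invented for illustration.
import networkx as nx

QUESTION_TEMPLATES = [
    "Which regions were affected by the event described in this report?",
    "What secondary hazards were triggered by the event?",
    "When did the event occur and what losses were recorded?",
]

def ask_llm(report_text, question):
    """Stand-in for the fine-tuned model; returns (subject, relation, object) triples."""
    return [("example earthquake", "affected_region", "example province"),
            ("example earthquake", "triggered", "landslides")]

def build_graph(reports):
    graph = nx.MultiDiGraph()
    for report in reports:
        for question in QUESTION_TEMPLATES:
            for subj, rel, obj in ask_llm(report, question):
                graph.add_edge(subj, obj, relation=rel)   # one edge per extracted triple
    return graph

kg = build_graph(["<disaster report text>"])
print(kg.number_of_nodes(), kg.number_of_edges())
```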
Procedia PDF Downloads 56
13240 A Systematic Review of Sensory Processing Patterns of Children with Autism Spectrum Disorders
Authors: Ala’a F. Jaber, Bara’ah A. Bsharat, Noor T. Ismael
Abstract:
Background: Sensory processing is a fundamental skill needed for the successful performance of daily living activities. These skills are impaired as part of the neurodevelopmental issues seen among children with autism spectrum disorder (ASD). This systematic review aimed to summarize the evidence on the differences in sensory processing and motor characteristics between children with ASD and children with typical development (TD). Method: This systematic review followed the guidelines of the preferred reporting items for systematic reviews and meta-analysis. The search terms included sensory, motor, condition, and child-related terms or phrases. The electronic search utilized Academic Search Ultimate, CINAHL Plus with Full Text, ERIC, MEDLINE, MEDLINE Complete, Psychology and Behavioral Sciences Collection, and SocINDEX with full-text databases. The hand search included looking for potential studies in the references of related studies. The inclusion criteria were studies published in English between 2009 and 2020 that included children aged 3-18 years with a confirmed ASD diagnosis according to the DSM-V criteria, included a control group of typical children, included outcome measures related to sensory processing and/or motor functions, and were available in full text. The review of included studies followed the Oxford Centre for Evidence-Based Medicine guidelines, the Guidelines for Critical Review Form of Quantitative Studies, and the guidelines for conducting systematic reviews by the American Occupational Therapy Association. Results: Eighty-eight full-text studies related to the differences between children with ASD and children with TD in terms of sensory processing and motor characteristics were reviewed, of which eighteen articles were included in the quantitative synthesis. The results reveal that children with ASD had more extreme sensory processing patterns than children with TD, such as hyper-responsiveness and hypo-responsiveness to sensory stimuli. Also, children with ASD had limited gross and fine motor abilities and lower strength, endurance, balance, eye-hand coordination, movement velocity, cadence, and dexterity, with a higher rate of gait abnormalities than children with TD. Conclusion: This systematic review provides preliminary evidence suggesting that motor functioning should be addressed in the evaluation of and intervention for children with ASD, and that sensory processing should be supported among children with TD. Future research should investigate how performance and engagement in daily life activities are affected by sensory processing and motor skills.
Keywords: sensory processing, occupational therapy, children, motor skills
Procedia PDF Downloads 128
13239 Robust and Real-Time Traffic Counting System
Authors: Hossam M. Moftah, Aboul Ella Hassanien
Abstract:
In recent years, the importance of automatic traffic control has increased due to traffic congestion, especially in big cities, where signal control and efficient traffic management are needed. Traffic counting, as one form of traffic control, is important for knowing the road traffic density in real time. This paper presents a fast and robust traffic counting system using different image processing techniques. The proposed system is composed of the following four fundamental building phases: image acquisition, pre-processing, object detection, and finally counting the connected objects. The object detection phase comprises the following five steps: subtracting the background, converting the image to binary, closing gaps and connecting nearby blobs, smoothing the image to remove noise and very small objects, and detecting the connected objects. Experimental results show the great success of the proposed approach.
Keywords: traffic counting, traffic management, image processing, object detection, computer vision
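The four phases listed above map naturally onto standard OpenCV building blocks, as the hedged sketch below shows; the parameter values and the shadow/noise handling are illustrative choices, not the exact configuration evaluated in the paper.

```python
# Illustrative parameters; shadow and noise handling in the actual system
# may differ.
import cv2

def count_vehicles(video_path, min_area=400):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    counts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                              # subtract background
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # binarize, drop shadow pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # close gaps, connect blobs
        mask = cv2.medianBlur(mask, 5)                              # smooth away specks
        n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        # Label 0 is the background; count components above the size threshold.
        counts.append(sum(1 for s in stats[1:] if s[cv2.CC_STAT_AREA] >= min_area))
    cap.release()
    return counts
```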
Procedia PDF Downloads 294
13238 Development of Electronic Services in Georgia: Analysis of Current Situation
Authors: Dato Surmanidze, Dato Antadze, Tornike Partenadze
Abstract:
Public online services in Georgia are concentrated on the main target segments: public administration, business, the population, and non-governmental and other interested organizations. Therefore, the strategy of digital Georgia is focused on providing G2C, G2B/B2G, G2NGO, and G2G services. In the G2C framework, sophisticated and high-technology online services have been developed in order to provide passports, identity cards, and documentation concerning residence and civil acts (birth, marriage, divorce, child adoption, change of name and surname, death, etc.), as well as other services. Websites like my.gov.ge and sda.gov.ge offer remote services such as electronic application, processing, and decision making. In line with international standards, automated services such as electronic tenders, product catalogues, invoices, and payments have been developed. This creates a better investment climate for foreign companies in Georgia within the framework of G2B policy. The website mybusiness.gov.ge creates better conditions for local business. Among the electronic services is e-NRMS (the electronic system for national resource management), which was introduced by the Ministry of Finance of Georgia. The system was created in order to ensure the management of national resources by state and business organizations. It is integrated with banking services and provides G2C, G2B, and B2G representatives with electronic services. A portal, meteo.gov.ge, was also created, which provides electronic services concerning air, geological, environmental, and pollution issues. worknet.gov.ge should also be mentioned; it is an electronic hub of information management for employers and employees. The labor market information portal is intended to facilitate the receipt of information, its analysis, and its delivery to interested people such as employers and employees. However, two years on, only the employees' portal has been activated; consequently, awareness of the portal, its competitiveness, and its success are undermined.
Keywords: electronic services, public administration, information technology, information society
Procedia PDF Downloads 268
13237 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using the novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which provide justification for updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking contextual or semantic morphological implications into account. Therefore, it is necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using recurrent neural networks (RNN). Given that morphological analysis coverage is very low for dialectal Arabic, it is important to investigate in depth how dialect data influence the accuracy of these approaches, by developing dialectal morphological processing tools to show that dialectal variability can help improve the analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
Procedia PDF Downloads 42
13236 Correlation between Funding and Publications: A Pre-Step towards Future Research Prediction
Authors: Ning Kang, Marius Doornenbal
Abstract:
Funding is a very important – if not crucial – resource for research projects. Usually, funding organizations will publish a description of the funded research to describe the scope of the funding award. Logically, we would expect research outcomes to align with this funding award. For that reason, we might be able to predict future research topics based on present funding award data. That said, it remains to be shown if and how future research topics can be predicted using the funding information. In this paper, we extract funding project information and the abstracts of the papers they generated from the Gateway to Research database as one group, and use papers from the same domains and publication years in the Scopus database as a baseline comparison group. We annotate both the project awards and the papers resulting from the funded projects with linguistic features (noun phrases), and then calculate tf-idf and cosine similarity between these two sets of features. We show that the cosine similarity within the project-generated papers group is larger than in the project-baseline group, and also that these two groups of similarities are significantly different. Based on this result, we conclude that the funding information does correlate with the content of the future research output of the funded project at the topical level. How funding really changes the course of science or of scientific careers remains an elusive question.
Keywords: natural language processing, noun phrase, tf-idf, cosine similarity
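The core comparison can be sketched in a few lines of scikit-learn, shown below; plain n-grams stand in for the noun-phrase features, and the award and paper texts are invented examples rather than data from Gateway to Research or Scopus.

```python
# Invented example texts; plain n-grams stand in for the noun-phrase features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

award = "Development of low-cost sensors for monitoring urban air quality"
project_papers = ["A low-cost particulate sensor network deployed across a city",
                  "Calibration of inexpensive air-quality sensors in urban settings"]
baseline_papers = ["Deep learning for protein structure prediction",
                   "A survey of recommender systems for e-commerce"]

vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 3))
X = vec.fit_transform([award] + project_papers + baseline_papers)

award_vec = X[0]
project_sim = cosine_similarity(award_vec, X[1:1 + len(project_papers)]).mean()
baseline_sim = cosine_similarity(award_vec, X[1 + len(project_papers):]).mean()
print(f"project-generated papers: {project_sim:.3f}   baseline papers: {baseline_sim:.3f}")
```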
Procedia PDF Downloads 246
13235 Efficient Manageability and Intelligent Classification of Web Browsing History Using Machine Learning
Authors: Suraj Gururaj, Sumantha Udupa U.
Abstract:
Browsing the Web has emerged as the de facto activity performed on the Internet. Although browsing gets tracked, the manageability of Web browsing history is very poor. In this paper, we present a workable solution, implemented using machine learning and natural language processing techniques, for efficient management of a user's browsing history. The significance of adding such a capability to a Web browser is that it ensures efficient and quick information retrieval from the browsing history, which currently is very challenging. Our solution guarantees that any important websites visited in the past remain easily accessible because of the intelligent and automatic classification. In a nutshell, the solution is implemented as a browser extension that intelligently classifies the browsing history into the most relevant category automatically, without any user intervention. This guarantees that no information is lost and increases productivity by saving the time spent revisiting websites that were of importance.
Keywords: adhoc retrieval, Chrome extension, supervised learning, tile, Web personalization
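A small sketch of the classification idea follows: page titles from the history are mapped to categories with a bag-of-words classifier. The categories, training titles, and model choice are invented for illustration; the actual extension may use different features and labels.

```python
# Invented categories, titles and model choice; the extension itself may use
# different features and labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_titles = ["Stack Overflow - How to merge two dicts in Python",
                "BBC News - Markets rally after rate decision",
                "Booking.com - Hotels in Lisbon",
                "GitHub - fix flaky unit test in CI pipeline"]
train_labels = ["programming", "news", "travel", "programming"]

classifier = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
classifier.fit(train_titles, train_labels)

new_history = ["Python typing cheat sheet", "Booking.com - Hotels in Porto"]
print(classifier.predict(new_history))    # one predicted category per visited title
```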
Procedia PDF Downloads 376
13234 Intelligent Process and Model Applied for E-Learning Systems
Authors: Mafawez Alharbi, Mahdi Jemmali
Abstract:
E-learning is a developing area, especially in education, and it can provide several benefits to learners. An intelligent system that collects all the components satisfying user preferences is therefore important. This research presents an approach capable of personalizing e-information and giving users what they need according to their preferences. The proposal builds up knowledge as more evaluations are made by the user. In addition, it can learn from the user's habits. Finally, we present a walk-through to demonstrate how the intelligent process works.
Keywords: artificial intelligence, architecture, e-learning, software engineering, processing
Procedia PDF Downloads 191