Search results for: portable document format
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1490

1370 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format

Authors: Maryam Fallahpoor, Biswajeet Pradhan

Abstract:

Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretations and, thereby, reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: neuroimaging informatics technology initiative (NIfTI) and digital imaging and communications in medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions. 
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest area under the curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM-format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
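The clipping, resampling, normalization, and zero-centering steps described above can be sketched as follows. This is a minimal illustration, not the authors' code: nearest-neighbour resampling stands in for their interpolation, and the synthetic input merely mimics a 512 × 512 scan with a variable slice count; the HU window and target dimensions follow the abstract.

```python
import numpy as np

def preprocess_ct(volume, target_shape=(128, 128, 60), hu_window=(-1000, 400)):
    """Clip to the HU window, resample (nearest neighbour) to a fixed
    shape, then normalize to [0, 1] and zero-center."""
    lo, hi = hu_window
    vol = np.clip(volume.astype(np.float32), lo, hi)
    # Resample so every scan ends up with the same matrix size and slice count.
    idx = [np.linspace(0, s - 1, t).round().astype(int)
           for s, t in zip(vol.shape, target_shape)]
    vol = vol[np.ix_(*idx)]
    vol = (vol - lo) / (hi - lo)   # normalization to [0, 1]
    vol -= vol.mean()              # zero-centering
    return vol

# A synthetic stand-in for a 512 x 512 CT scan with 80 slices.
scan = np.random.randint(-1200, 600, size=(512, 512, 80))
out = preprocess_ct(scan)
print(out.shape)  # (128, 128, 60)
```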

Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format

Procedia PDF Downloads 46
1369 Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4

Authors: Farmesk Abubaker, Francesco Tortorici, Marco Capogni, Concetta Sutera, Vincenzo Bellini

Abstract:

This project concerns the detection efficiency of the portable triple-to-double coincidence ratio (TDCR) system at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure beta-emitter or pure electron-capture radionuclides. The dependency of the simulated detection efficiency of the TDCR, computed with the Geant4 Monte Carlo simulation code, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitter radionuclides such as 3H and 14C, for which this dependency is relevant. The results achieved in this analysis can be used for selecting the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with the available ones, measured by the ENEA TDCR portable detector, for some pure beta-emitter radionuclides. This analysis allowed us to improve the knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits in many applications in the field of nuclear medicine and in the nuclear energy industry.

Keywords: Birks constant, defocusing parameter, GEANT4 code, TDCR parameter

Procedia PDF Downloads 122
1368 To Determine the Effects of Regulatory Food Safety Inspections on the Grades of Different Categories of Retail Food Establishments across the Dubai Region

Authors: Shugufta Mohammad Zubair

Abstract:

This study explores the effect of the new food inspection system, also called the new inspection color card scheme, on the reduction of critical and major food safety violations in Dubai. Data were collected from all retail food service establishments located in two zones of the city. Each establishment was visited twice, once before the launch of the new system and once after. In each visit, the inspection checklist was used as the evaluation tool for observing critical and major violations. The old format of the inspection checklist assigned scores based on violations, but the new checklist for the color card scheme is divided into administrative, general, major, and critical sections, which gives inspectors a better classification for identifying the critical and major violations of concern. The study found that violations have been marked more clearly since the launch of the new inspection system, with inspectors able to mark and categorize violations effectively. There was a 10% decrease in the number of food establishments previously given an A grade, and B and C grades also dropped considerably, by 5%.

Keywords: food inspection, risk assessment, color card scheme, violations

Procedia PDF Downloads 300
1367 An Evaluation Model for Automatic Map Generalization

Authors: Quynhan Tran, Hong Fan, Quockhanh Pham

Abstract:

Automatic map generalization is a well-known problem in cartography, and research on it has accompanied the development of the field. Traditional maps are plotted manually by cartographic experts. This paper studies the non-scale automatic generalization of residential polygons and house marker symbols, and proposes a methodology for evaluating the resulting maps based on the minimal spanning tree (MST). The MST before and after map generalization is compared to evaluate whether the generalization result maintains the geographical distribution of features. The MST in vector format is first converted into a raster format with a grid size of 2 mm (distance on the map). The number of matching grid cells before and after map generalization is counted, and the ratio of overlapping cells to the total number of cells is calculated. Evaluation experiments were conducted to verify the results. Experiments show that this methodology can give an objective evaluation of the feature distribution and assist specialists when they evaluate the results of non-scale automatic generalization by eye.
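The grid-overlap measure described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' implementation: MST edges are rasterized by dense sampling onto a cell grid, and the ratio of cells occupied both before and after generalization is reported; the toy edge list and grid size are illustrative.

```python
import numpy as np

def rasterize_segments(segments, cell=2.0, size=(50, 50)):
    """Mark every grid cell that an MST edge passes through (sampled)."""
    grid = np.zeros(size, dtype=bool)
    for (x0, y0), (x1, y1) in segments:
        for t in np.linspace(0.0, 1.0, 100):
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            grid[int(y // cell), int(x // cell)] = True
    return grid

def overlap_ratio(mst_before, mst_after, cell=2.0):
    a = rasterize_segments(mst_before, cell)
    b = rasterize_segments(mst_after, cell)
    matching = np.logical_and(a, b).sum()
    return matching / max(a.sum(), 1)  # matching cells / cells before

before = [((0, 0), (10, 0)), ((10, 0), (10, 10))]
after = [((0, 0), (10, 0))]  # one MST edge dropped by generalization
print(overlap_ratio(before, after))
```

A ratio near 1 indicates the generalized map preserves the original feature distribution; dropping an edge lowers it accordingly.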

Keywords: automatic cartography generalization, evaluation model, geographic feature distribution, minimal spanning tree

Procedia PDF Downloads 608
1366 A Quantitative Evaluation of Text Feature Selection Methods

Authors: B. S. Harish, M. B. Revanasiddappa

Abstract:

Due to the rapid growth of text documents in digital form, automated text classification has become an important research area in the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume, and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency, and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tf-idf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ²), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM), and Symbolic Feature Selection (SFS), to classify text documents. We evaluated all the feature selection methods on standard datasets like 20 Newsgroups, the 4 University dataset, and Reuters-21578.
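One of the listed FS methods, Chi-Square (χ²), can be sketched as scoring a term against a class from a 2×2 term/class contingency table. The counts below are illustrative, not drawn from the paper's datasets:

```python
def chi_square(n11, n10, n01, n00):
    """Chi-square statistic for a 2x2 term-class contingency table:
    n11 = docs in class containing term, n10 = docs outside class with term,
    n01 = docs in class without term,   n00 = docs with neither."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n11 + n10) * (n10 + n00) * (n01 + n00)
    return num / den if den else 0.0

# A term concentrated in one class scores high and would be selected.
print(round(chi_square(40, 5, 10, 45), 2))
```

Terms are ranked by this score per class, and the top-scoring terms are kept as features.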

Keywords: classifiers, feature selection, text classification

Procedia PDF Downloads 426
1365 Control Configuration System as a Key Element in Distributed Control System

Authors: Goodarz Sabetian, Sajjad Moshfe

Abstract:

The control system of a hi-tech industrial plant can be understood, both broadly and in depth, through one special document. Vast heavy industries such as power plants, with a large number of I/O signals, are controlled by a distributed control system (DCS). This system comprises many parts, from field level to high control level, and junior instrument engineers may be confused by this enormous amount of information. The key document that can solve this problem is the "control configuration system diagram" for each type of DCS. This is a road map that covers all activities with respect to the control system in an industrial plant, and it must be studied by everyone concerned. It plays an important role from the start of control system design until the end, when the system is delivered and put into operation. It should be included in bid documents, contracts, and purchasing specifications, and used in the different phases of project EPC (engineering, procurement, and construction). The separate parts of a DCS are categorized here in order of importance, with a brief description and some practical plans. This article can be useful for all instrument and control engineers who work in EPC projects.

Keywords: control, configuration, DCS, power plant, bus

Procedia PDF Downloads 466
1364 Applications of Visual Ethnography in Public Anthropology

Authors: Subramaniam Panneerselvam, Gunanithi Perumal, KP Subin

Abstract:

Visual ethnography is used to document the culture of a community through visual means, either photography or audio-visual documentation. Visual ethnographic techniques are widely used in visual anthropology, where anthropologists use the camera to capture the cultural image of the studied community. There is scope for subjectivity when a culture is documented by an external person, but the emergence of public anthropology provides an opportunity for participants to document their own culture. There is, therefore, a need to equip participants with the skills of visual ethnography. Mobile phone technology gives everyone a visual documentation facility to capture moments instantly, and visual ethnography facilitates multiple interpretations by its audiences. This study explores the effectiveness of visual ethnography among tribal youth from a public anthropology perspective. A case study was conducted to equip the tribal youth of the Nilgiris in visual ethnography, and the outcome of the experiment is shared in this paper.

Keywords: visual ethnography, visual anthropology, public anthropology, multiple-interpretation, case study

Procedia PDF Downloads 139
1363 Development of Distance Training Packages for Teacher on Education Management for Learners with Special Needs

Authors: Jareeluk Ratanaphan

Abstract:

The purposes of this research were: 1. to survey teachers' needs for knowledge about special education management for students with special needs; 2. to develop distance training packages for teachers on special education management for students with special needs; 3. to study the effects of using the packages on trainees' achievement; and 4. to study trainees' opinions of the distance training packages. The design of the experiment was research and development. The research sample comprised 86 teachers for the survey and 22 teachers for studying the effects of the packages on achievement and opinion. The research instruments comprised: 1) training packages on special education management for students with special needs, 2) an achievement test, and 3) a questionnaire. Mean, percentage, standard deviation, t-test, and content analysis were used for data analysis. The findings of the research were as follows: 1. Teachers needed knowledge about teaching learners with learning disability, mental retardation, autism, and physical and health impairment, and about research in special education. 2. The package comprised a document on special education management for students with special needs and a manual for the distance training packages. The document consisted of the name of the packages, an explanation for the educator, the content structure, concepts, objectives, content, and activities. The manual consisted of an explanation of the document, objectives, instructions for using the package, a training schedule, and evaluation. The efficiency of the packages was established at 79.50/81.35. 3. The post-test average scores of trainees' achievement were higher than the pre-test scores. 4. Trainees' opinion of the package was at the highest level.

Keywords: distance training package, teacher, learner with special needs

Procedia PDF Downloads 463
1362 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules

Authors: Mohsen Maraoui

Abstract:

With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing for multilingual text documents based on conceptual network formalism. We propose to use this formalism as an indexing language to represent the descriptive concepts and their weighting; these concepts represent the content of the document. Our contribution is based on two steps. In the first step, we propose the extraction of index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on an association rules model, in an attempt to discover the non-taxonomic, or contextual, relations between the concepts of a document. These latent relations are buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. We then apply the same statistical process, regardless of the language, to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.
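The association-rule step above, discovering contextual relations from concept co-occurrence, can be sketched in crisp form (the paper uses a fuzzy variant; the concept sets, thresholds, and support/confidence measures here are illustrative stand-ins):

```python
from itertools import combinations

# Toy "documents" as sets of extracted concepts.
docs = [{"bank", "money", "loan"},
        {"bank", "river"},
        {"bank", "money"},
        {"money", "loan"}]

def rules(docs, min_support=0.5, min_conf=0.6):
    """Return rules (x -> y, support, confidence) over concept pairs
    whose co-occurrence clears both thresholds."""
    n = len(docs)
    out = []
    items = set().union(*docs)
    for a, b in combinations(sorted(items), 2):
        both = sum(1 for d in docs if a in d and b in d)
        if both / n >= min_support:
            for x, y in ((a, b), (b, a)):
                cx = sum(1 for d in docs if x in d)
                if both / cx >= min_conf:
                    out.append((x, y, both / n, both / cx))
    return out

for x, y, s, c in rules(docs):
    print(f"{x} -> {y}  support={s:.2f} confidence={c:.2f}")
```

High-confidence rules such as loan -> money capture the kind of non-taxonomic relation the classification step is meant to surface.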

Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing

Procedia PDF Downloads 120
1361 Characterization of Shear and Extensional Rheology of Fibre Suspensions Prior to Atomization

Authors: Siti N. M. Rozali, A. H. J. Paterson, J. P. Hindmarsh

Abstract:

Spray drying of fruit juices from liquid to powder is desirable as the powders are easier to handle, especially for storage and transportation. In this project, pomace fibres will be used as a drying aid during spray drying, replacing the commonly used maltodextrins. The main attraction of this drying aid is that the pomace fibres are originally derived from the fruit itself. However, the addition of micro-sized fibres to fruit juices is expected to affect the rheology and subsequent atomization behaviour during the spray drying process. This study focuses on the determination and characterization of the rheology of juice-fibre suspensions specifically inside a spray dryer nozzle. Results show that the juice-fibre suspensions exhibit shear thinning behaviour with a significant extensional viscosity. The shear and extensional viscosities depend on several factors which include fibre fraction, shape, size and aspect ratio. A commercial capillary rheometer is used to characterize the shear behaviour while a portable extensional rheometer has been designed and built to study the extensional behaviour. Methods and equipment will be presented along with the rheology results. Rheology or behaviour of the juice-fibre suspensions provides an insight into the limitations that will be faced during atomization, and in the future, this finding will assist in choosing the best nozzle design that can overcome the limitations introduced by the fibre particles thus resulting in successful spray drying of juice-fibre suspensions.
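The shear-thinning behaviour reported above is commonly described with a power-law (Ostwald-de Waele) model; the sketch below uses that standard model with illustrative constants, not the paper's measured values:

```python
def apparent_viscosity(shear_rate, K=2.0, n=0.4):
    """Power-law model: eta = K * gamma_dot**(n - 1).
    K is the consistency index (Pa.s^n); n < 1 means shear thinning."""
    return K * shear_rate ** (n - 1)

# Apparent viscosity falls as the shear rate inside the nozzle rises.
for rate in (1.0, 100.0, 10000.0):
    print(f"shear rate {rate:>8} 1/s -> {apparent_viscosity(rate):.4f} Pa.s")
```

A fibre suspension would add fraction- and aspect-ratio-dependent terms to K and n, which is what the capillary and extensional rheometry in the study characterizes.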

Keywords: extensional rheology, fibre suspensions, portable extensional rheometer, shear rheology

Procedia PDF Downloads 180
1360 Understanding Embryology in Promoting Peace Leadership: A Document Review

Authors: Vasudev Das

Abstract:

The specific problem is that many leaders of the 21st century do not understand that the extermination of embryos wreaks havoc on peace leadership. The purpose of the document review is to understand embryology in facilitating peace leadership. Extermination of human embryos generates a requital wave of violence which later falls on human society in the form of disturbances, considering that violence breeds further violence as a consequentiality. The study results reveal that a deep understanding of embryology facilitates peace leadership, given that minimizing embryo extermination enhances non-violence in the global village. Neo-Newtonians subscribe to the idea that every action has an equal and opposite reaction. The US Federal Government recognizes the embryo or fetus as a member of Homo sapiens. The social change implications of this study are that understanding human embryology promotes peace leadership, considering that the consequentiality of embryo extermination can serve as a deterrent for violence on embryos.

Keywords: consequentiality, Homo sapiens, neo-Newtonians, violence

Procedia PDF Downloads 108
1359 Adaptation of Projection Profile Algorithm for Skewed Handwritten Text Line Detection

Authors: Kayode A. Olaniyi, Tola. M. Osifeko, Adeola A. Ogunleye

Abstract:

Text line segmentation is an important step in document image processing. It represents a labeling process that assigns the same label, using a distance metric probability, to spatially aligned units. Text line detection techniques have been implemented successfully mainly in printed documents. However, processing handwritten text, especially in unconstrained documents, has remained a key problem. This is because unconstrained handwritten text lines are often not uniformly skewed: the spaces between text lines may not be obvious, complicated by the nature of handwriting and by overlapping ascenders and/or descenders of some characters. Hence, text line detection and segmentation represent a leading challenge in handwritten document image processing. Text line detection methods that rely on the traditional global projection profile of the document cannot efficiently cope with variable skew angles between different text lines, so the formulation of a horizontal line as a separator is often not effective. This paper presents a technique for segmenting a handwritten document into distinct lines of text. The proposed algorithm starts by partitioning the initial text image, across its width, into vertical strips of about 5% each. For each vertical strip, the histogram of horizontal runs is projected, under the assumption that text appearing within a single strip is almost parallel. The algorithm slides a window through the first vertical strip on the left side of the page and identifies each new minimum corresponding to a valley in the projection profile. Each valley represents the starting point of an orientation line, and the ending point is the minimum point on the projection profile of the next vertical strip. The derived text lines traverse around any obstructing handwritten connected components by associating each component with either the line above or the line below; this decision is made using the probability obtained from a distance metric. The technique outperforms the global projection profile for text line segmentation and is robust in handling skewed documents and those with lines running into each other.
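The strip-wise projection-profile idea can be sketched as follows. This is a simplified illustration, not the authors' algorithm: a binary page image is split into vertical strips, each strip's horizontal projection is computed, and blank valleys (local minima) mark candidate line separators within that strip; the sliding-window valley tracking and connected-component association are omitted.

```python
import numpy as np

def strip_valleys(page, n_strips=20):
    """For each vertical strip, return the rows whose horizontal
    projection is zero and locally minimal (candidate separators)."""
    h, w = page.shape
    strip_w = max(w // n_strips, 1)
    valleys = []
    for s in range(0, w, strip_w):
        proj = page[:, s:s + strip_w].sum(axis=1)  # histogram of horizontal runs
        rows = [r for r in range(1, h - 1)
                if proj[r] == 0                     # blank gap between lines
                and proj[r] <= proj[r - 1] and proj[r] <= proj[r + 1]]
        valleys.append(rows)
    return valleys

# Two synthetic "text lines" separated by a blank band at rows 4-5.
page = np.zeros((10, 40), dtype=int)
page[1:4, :] = 1
page[6:9, :] = 1
print(strip_valleys(page, n_strips=4)[0])  # [4, 5]
```

Because each strip gets its own valleys, a line that drifts downward across the page is tracked strip by strip instead of being forced onto one horizontal separator.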

Keywords: connected-component, projection-profile, segmentation, text-line

Procedia PDF Downloads 93
1358 Trends in Language Testing in Primary Schools in Rivers State, Nigeria

Authors: Okoh Chinasa, Asimuonye Augusta

Abstract:

This study investigated the trends in language testing in primary schools in Rivers State. English language past question papers were collected from four (4) primary schools in Onelga Local Government Area and Ahoada East Local Government Area. Four research questions guided the study, which aimed to find out the appropriateness of the test formats used for language testing and the language skills tested. The past question papers collected, which served as the instrument, were analyzed based on criteria developed by the researchers, in line with documentary frequency studies, a type of survey study. The study revealed that some of the four language skills were not adequately assessed and that the termly question papers were developed by a central examination body. From the past questions, it was observed that an imbalance exists in the test formats used. The paper recommends that all the language skills be tested using correct test formats, to ensure that pupils are given a fair chance to show what they know and can do in the English language, and so that teachers can use the test results for effective decision-making.

Keywords: discrete test, integrative test, testing approach, test format

Procedia PDF Downloads 391
1357 Identifying the Goals of a Multicultural Curriculum for the Primary Education Course

Authors: Fatemeh Havas Beigi

Abstract:

The purpose of this study is to identify the objectives of a multicultural curriculum for the primary education period from the perspective of ethnic teachers, education experts, and cultural professionals. The research paradigm is interpretive, the approach is qualitative, and the strategy is content analysis; sampling was purposeful, using the snowball method, and the sample of informants (Iranian ethnic teachers and experts) was estimated at 67 people at theoretical saturation. Data were collected through semi-structured interviews, with both individual and focus-group interviews used. The data were in audio format, and first-cycle and second-cycle coding were used for analysis. Based on the data analysis, 11 objectives for a multicultural curriculum were identified: paying attention to ethnic equality, expanding educational opportunities and justice, peaceful coexistence, education against ethnic and racial discrimination, paying attention to human value and dignity, accepting religious diversity, getting to know ethnicities and cultures, promoting teaching-learning, fostering self-confidence, building national unity, and developing cultural commonalities.

Keywords: objective, multicultural curriculum, connect, elementary education period

Procedia PDF Downloads 63
1356 An Innovative High Energy Density Power Pack for Portable and Off-Grid Power Applications

Authors: Idit Avrahami, Alex Schechter, Lev Zakhvatkin

Abstract:

This research focuses on developing a compact, light hydrogen generator (HG), coupled with fuel cells (FC), to provide a high-energy-density power pack (HEDPP) solution with roughly 10 times the energy density of Li-ion batteries. The HEDPP is designed for portable and off-grid power applications such as drones, UAVs, stationary off-grid power sources, unmanned marine vehicles, and more. Hydrogen is handled in the safest possible way: the fuel is a chemical powder kept at room temperature and ambient pressure, activated only when the power is on. Hydrogen generation is based on a stabilized chemical reaction of sodium borohydride (SBH) and water. The proposed solution enables a 'no storage' hydrogen-based power pack: hydrogen is produced and consumed on the spot during operation, so there is no need for high-pressure hydrogen tanks, which are large, heavy, and unsafe. In addition to its high energy density, ease of use, and safety, the presented power pack has the significant advantage of versatility, with deployment possible in numerous applications and at different scales. This patented HG was demonstrated with several prototypes in our lab and proved feasible and highly efficient for several applications. For example, in applications where water is available (such as marine vehicles, water and sewage infrastructure, and stationary applications), the energy density of the suggested power pack may reach 2700-3000 Wh/kg, which is again more than 10 times higher than conventional lithium-ion batteries. In other applications (e.g., UAVs or small vehicles), the energy density may exceed 1000 Wh/kg.
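A back-of-the-envelope check of the hydrolysis route is possible from the stoichiometry NaBH4 + 2 H2O -> NaBO2 + 4 H2. The constants below are textbook values and the fuel-cell efficiency is an assumption, not the paper's data; the point is only that the order of magnitude of the quoted energy density is plausible when water is supplied externally:

```python
M_NABH4 = 37.83            # molar mass of NaBH4, g/mol
M_H2 = 2.016               # molar mass of H2, g/mol
H2_LHV_WH_PER_KG = 33_300  # lower heating value of hydrogen, Wh/kg
FC_EFFICIENCY = 0.5        # assumed fuel-cell conversion efficiency

mol_sbh = 1000.0 / M_NABH4            # moles of NaBH4 in 1 kg of powder
kg_h2 = 4 * mol_sbh * M_H2 / 1000.0   # 4 mol H2 per mol NaBH4
wh_per_kg_sbh = kg_h2 * H2_LHV_WH_PER_KG * FC_EFFICIENCY

print(f"{kg_h2:.3f} kg H2 per kg NaBH4")
print(f"~{wh_per_kg_sbh:.0f} Wh electrical per kg NaBH4 (water supplied externally)")
```

Roughly 0.21 kg of hydrogen, i.e. a few thousand Wh of electricity, per kg of SBH; accounting for the generator, fuel cell, and accessory mass brings this toward the 2700-3000 Wh/kg system figure quoted above.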

Keywords: hydrogen energy, sodium borohydride, fixed-wing UAV, energy pack

Procedia PDF Downloads 57
1355 Teaching English in Low Resource-Environments: Problems and Prospects

Authors: Gift Chidi-Onwuta, Iwe Nkem Nkechinyere, Chikamadu Christabelle Chinyere

Abstract:

The teaching of English is a resource-driven activity that requires resource-rich classroom settings for the delivery of effective lessons and the acquisition of the interpersonal skills needed for integration in a target-language environment. However, throughout the world, English is often taught in low-resource classrooms. This paper aims to reveal the common problems associated with teaching English in low-resource environments, and the prospects for teachers who find themselves in such under-equipped teaching settings. A self-structured, validated questionnaire combining closed-ended, open-question, and scaling formats was administered to teachers across five countries: Nigeria, Cameroon, Iraq, Turkey, and Sudan. The study adopts situational language teaching theory (SLTT), which emphasizes a performance-improvement imperative. The study inclines to this model because it maintains that learning must be fun and enjoyable, like playing a favorite sport, just as in real life; since teaching resources make learning engaging, we found this model apt for the current study. Teachers' perceptions of the accessibility and functionality of teaching material resources, the nature of teaching outcomes in resource-poor environments, their levels of involvement in improvisation, and the prospects associated with resource limitations were sourced. Data were analysed using percentages and presented in frequency tables. Results showed that a greater number of teachers across these nations do not have access to sufficient productive resource materials that can aid effective English language teaching. Teaching outcomes, from the findings, are affected by low material resources; however, the results show certain advantages to teaching English with limited resources: flexibility and autonomy with students, and creativity and innovation among teachers. Results further revealed group work, storytelling, critical-thinking strategies, flex, cardboards and flashcards, dictation, and dramatization as common teaching strategies and materials adopted by teachers to overcome resource-related challenges in classrooms.

Keywords: teaching materials, low-resource environments, English language teaching, situational language theory

Procedia PDF Downloads 104
1354 Analysis on the Need of Engineering Drawing and Feasibility Study on 3D Model Based Engineering Implementation

Authors: Parthasarathy J., Ramshankar C. S.

Abstract:

Engineering drawings today play an important role in every part of an industry and are influential in every phase of the product development process. Traditionally, drawings are used for communication in industry because they are the clearest way to represent product manufacturing information. Until recently, manufacturing activities were driven by engineering data captured in 2D paper documents or digital representations of those documents, so the need for engineering drawings seemed inevitable. Yet engineering drawings have the disadvantage of requiring re-entry of data throughout the manufacturing life cycle; this document-based approach is prone to errors and incurs costly re-entry of data at every stage. There is therefore a case for eliminating engineering drawings from the product development process and implementing 3D Model Based Engineering (3D MBE, or 3D MBD). Adopting MBD appears to be the next logical step to continue reducing time-to-market and improving product quality. Ideally, by fully applying the MBD concept, the product definition will no longer rely on engineering drawings throughout the product lifecycle. This project addresses the need for engineering drawings and their influence in various parts of an industry, the need to implement 3D Model Based Engineering, its advantages, and the technical barriers that must be overcome in order to implement it. The project also addresses the requirements for neutral formats and their realisation, in order to implement digital product definition principles in a lightweight format. To prove the concepts of 3D Model Based Engineering, the screw jack body part is demonstrated. At ZF Windpower Coimbatore Limited, 3D Model Based Definition has been implemented for the torque arm (machining and casting), steel tube, pinion shaft, cover, and energy tube.

Keywords: engineering drawing, model based engineering MBE, MBD, CAD

Procedia PDF Downloads 406
1353 Predicting Success and Failure in Drug Development Using Text Analysis

Authors: Zhi Hao Chow, Cian Mulligan, Jack Walsh, Antonio Garzon Vico, Dimitar Krastev

Abstract:

Drug development is resource-intensive, time-consuming, and increasingly expensive with each developmental stage. Success rates are also relatively low, and the resources committed are wasted with each failed candidate. As such, a reliable method of predicting the success of drug development is in demand. The hypothesis was that some failed drug candidates are pushed through developmental pipelines on the basis of false confidence and may share common linguistic features identifiable through sentiment analysis. Here, the concept of using text analysis to discover such features in research publications and investor reports as predictors of success was explored. RStudio was used to perform text mining and lexicon-based sentiment analysis, identifying affective phrases and determining their frequency in each document; SPSS was then used to determine the relationship between the defined variables and the accuracy of predicting outcomes. A total of 161 publications were collected and categorised into 4 groups: (i) cancer treatments, (ii) neurodegenerative disease treatments, (iii) vaccines, and (iv) others (all drugs that do not fit into the first 3 categories). Text analysis was then performed on each document within each drug category using 2 separate lexicons (BING and AFINN) in R to determine the frequency of positive and negative phrases in each document. Relative positivity and negativity values were then calculated by dividing the frequency of such phrases by the word count of each document. Regression analysis was then performed with SPSS statistical software on each dataset (values obtained using the BING or AFINN lexicon during text analysis), using a random selection of 61 documents to construct a model. The remaining documents were then used to determine the predictive power of the models. The model constructed from BING predicted the outcome of drug performance in clinical trials with an overall accuracy of 65.3%.
The AFINN model had lower accuracy (62.5%) than the BING model and was not effective at predicting the failure of drugs in clinical trials. Overall, the study did not show significant efficacy of the models at predicting the outcomes of drugs in development. Many improvements may need to be made to later iterations of the model to sufficiently increase its accuracy.
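The scoring step described above (lexicon hits divided by document word count) can be sketched in a few lines. The abstract used R with the BING and AFINN lexicons; the tiny Python lexicon below is a purely illustrative stand-in, not the actual word lists:

```python
# Minimal sketch of lexicon-based relative positivity/negativity scoring.
# The two small word sets are illustrative stand-ins for BING/AFINN.
POSITIVE = {"promising", "significant", "effective", "robust"}
NEGATIVE = {"failed", "adverse", "toxic", "inconclusive"}

def relative_sentiment(text: str) -> tuple[float, float]:
    """Return (relative positivity, relative negativity): the frequency
    of lexicon phrases divided by the document word count."""
    words = text.lower().split()
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)

doc = "The candidate showed promising and effective results but one trial failed"
pos_rate, neg_rate = relative_sentiment(doc)
```

These per-document rates are the independent variables the abstract then feeds into a regression model.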

Keywords: data analysis, drug development, sentiment analysis, text-mining

Procedia PDF Downloads 125
1352 Considerations upon Structural Health Monitoring of Small to Medium Wind Turbines

Authors: Nicolae Constantin, Ştefan Sorohan

Abstract:

Small and medium wind turbines run in quite different conditions compared to large ones and consequently require a different approach to structural health monitoring (SHM). There are four main differences between the above-mentioned categories: (i) significantly smaller dimensions, (ii) considerably higher rotation speed, (iii) generally small distance between the turbine and the energy consumer, and (iv) monitoring assumed in many situations by the owner. In such conditions, non-destructive inspections (NDI) have to be made as far as possible with affordable yet effective techniques, requiring portable and accessible equipment. Additionally, the turbines and their accessories should be easy to mount, dismount, and repair. As the materials used for such units can be metals, composites, or a combination of both, the technologies should be adapted accordingly. An example in which the two materials co-exist is the repair of the damaged metallic skin of a blade with a composite patch. The paper presents the inspection of the bonding state of the patch using portable ultrasonic equipment capable of applying the Lamb wave method, which proves efficient in both global and local inspections. The equipment is relatively easy to handle and can be borrowed from specialized laboratories or shared by a community of small wind turbine users, as the case may be. This evaluation is the first in a series aimed at assessing the efficiency of NDI performed with accessible, less sophisticated equipment and related inspection techniques having field inspection capabilities. The main goal is to extend such inspection procedures to other components of the wind power unit, such as the support tower, water storage tanks, etc.

Keywords: structural health monitoring, small wind turbines, non-destructive inspection, field inspection capabilities

Procedia PDF Downloads 314
1351 Efficient Layout-Aware Pretraining for Multimodal Form Understanding

Authors: Armineh Nourbakhsh, Sameena Shah, Carolyn Rose

Abstract:

Layout-aware language models have been used to create multimodal representations of documents in image form, achieving relatively high accuracy on document understanding tasks. However, the large number of parameters in the resulting models makes building and using them prohibitive without access to high-performance processing units with large memory capacity. We propose an alternative approach that can create efficient representations without the need for a neural visual backbone. This leads to an 80% reduction in the number of parameters compared to the smallest SOTA model, widely expanding applicability. In addition, our layout embeddings are pre-trained on spatial and visual cues alone and only fused with text embeddings in downstream tasks, which can facilitate applicability to low-resource or multi-lingual domains. Despite using only 2.5% of the training data, we show competitive performance on two form understanding tasks: semantic labeling and link prediction.

Keywords: layout understanding, form understanding, multimodal document understanding, bias-augmented attention

Procedia PDF Downloads 121
1350 The Use of Voice in Online Public Access Catalog as Faster Searching Device

Authors: Maisyatus Suadaa Irfana, Nove Eka Variant Anna, Dyah Puspitasari Sri Rahayu

Abstract:

Technological developments provide convenience to everyone. Nowadays, human-computer communication is mostly conducted via text. With the development of technology, human-computer communication can also be conducted by voice, much like communication between human beings. This provides an easy facility for many people, especially those with special needs. Voice search technology is applied here to the search of book collections in the OPAC (Online Public Access Catalog), so library visitors will find it faster and easier to locate the books they need. Integration with Google is needed to convert the voice into text. To optimize search time and results, the server downloads all the book data available in the server database. The data are then converted into JSON format. In addition, several algorithms are incorporated, including decomposition (parsing) of the JSON format into arrays, index construction, and analysis of the results. The aim is to make searching much faster than the usual OPAC search, because the data would otherwise have to be fetched from the database for every search request. A data update menu is provided to enable users to perform their own data updates and obtain the latest information.
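The server-side pipeline described above (dump the records, convert to JSON, parse back into arrays, build an index for fast lookup) can be sketched as follows; the book records and field names are hypothetical, not taken from the actual OPAC system:

```python
import json

# Hypothetical book records standing in for the OPAC database dump.
books = [
    {"id": 1, "title": "Introduction to Information Retrieval"},
    {"id": 2, "title": "Voice Interfaces for Libraries"},
]

# Step 1: serialise the records to JSON, as the abstract describes.
payload = json.dumps(books)

# Step 2: parse the JSON back into arrays and build an inverted index
# mapping each title word to the ids of matching books.
index: dict[str, list[int]] = {}
for record in json.loads(payload):
    for word in record["title"].lower().split():
        index.setdefault(word, []).append(record["id"])

def search(query: str) -> list[int]:
    """Look up a (voice-transcribed) query word directly in the index."""
    return index.get(query.lower(), [])
```

A transcribed spoken query then resolves against the in-memory index rather than issuing a fresh database query per request, which is the speed-up the abstract claims.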

Keywords: OPAC, voice, searching, faster

Procedia PDF Downloads 318
1349 Big Data and Cardiovascular Healthcare Management: Recent Advances, Future Potential and Pitfalls

Authors: Maariyah Irfan

Abstract:

Intro: Current cardiovascular (CV) care faces challenges such as low budgets and high hospital admission rates. This review aims to evaluate Big Data in CV healthcare management through the use of wearable devices in atrial fibrillation (AF) detection. AF may present intermittently, making it difficult for a healthcare professional to capture and diagnose a symptomatic rhythm. Methods: The iRhythm Zio patch, AliveCor portable electrocardiogram (ECG), and Apple Watch were chosen for review due to their involvement in controlled clinical trials and their integration with smartphones. The cost-effectiveness and AF detection of these devices were compared against the 12-lead ambulatory ECG (Holter monitor) that the NHS currently employs for the detection of AF. Results: The Zio patch was found to detect more arrhythmic events than the Holter monitor over a 2-week period. When patients presented to the emergency department with palpitations, AliveCor portable ECGs detected 6-fold more symptomatic events than the standard care group over 3 months. Based on preliminary results from the Apple Heart Study, only 0.5% of participants received irregular pulse notifications from the Apple Watch. Discussion: The Zio patch and AliveCor devices have promising potential to be implemented into the standard duty of care offered by the NHS, as they compare well to current routine measures. Nonetheless, companies must address the discrepancy between their target population and current consumers, as those who could benefit the most from the innovation may be left out due to cost and access.

Keywords: atrial fibrillation, big data, cardiovascular healthcare management, wearable devices

Procedia PDF Downloads 102
1348 Chemical and Biomolecular Detection at a Polarizable Electrical Interface

Authors: Nicholas Mavrogiannis, Francesca Crivellari, Zachary Gagnon

Abstract:

Development of low-cost, rapid, sensitive, and portable biosensing systems is important for the detection and prevention of disease in developing countries, biowarfare/antiterrorism applications, environmental monitoring, point-of-care diagnostic testing, and basic biological research. Currently, the most established, commercially available, and widespread assays for portable point-of-care detection and disease testing are paper-based dipstick and lateral flow test strips. These paper-based devices are often small, cheap, and simple to operate. The last three decades in particular have seen an emergence of these assays in diagnostic settings for detection of pregnancy, HIV/AIDS, blood glucose, influenza, urinary protein, cardiovascular disease, respiratory infections, and blood chemistries. Such assays are widely available largely because they are inexpensive, lightweight, portable, and simple to operate, and a few platforms are capable of multiplexed detection for a small number of sample targets. However, there is a critical need for sensitive, quantitative, and multiplexed detection capabilities for point-of-care diagnostics and for the detection and prevention of disease in the developing world that cannot be satisfied by current state-of-the-art paper-based assays. For example, applications including the detection of cardiac and cancer biomarkers and biothreat applications require sensitive multiplexed detection of analytes in the nM and pM range and cannot currently be served by inexpensive portable platforms, owing to their lack of sensitivity, lack of quantitative capabilities, and often unreliable performance. In this talk, inexpensive label-free biomolecular detection at liquid interfaces using a newly discovered electrokinetic phenomenon known as fluidic dielectrophoresis (fDEP) is demonstrated.
The electrokinetic approach involves exploiting the electrical mismatches between two aqueous liquid streams forced to flow side-by-side in a microfluidic T-channel. In this system, one fluid stream is engineered to have a higher conductivity relative to its neighbor, which has a higher permittivity. When a “low” frequency (< 1 MHz) alternating current (AC) electric field is applied normal to this fluidic electrical interface, the fluid stream with high conductivity displaces into the low-conductivity stream. Conversely, when a “high” frequency (20 MHz) AC electric field is applied, the high-permittivity stream deflects across the microfluidic channel. There is, however, a critical frequency between these two events, sensitive to the electrical differences between the two fluid phases (the fDEP crossover frequency), at which no fluid deflection is observed and the interface remains fixed when exposed to an external field. To perform biomolecular detection, two streams flow side-by-side in a microfluidic T-channel: one fluid stream with an analyte of choice and an adjacent stream with a specific receptor to the chosen target. The two fluid streams merge, and the fDEP crossover frequency is measured at different axial positions down the resulting liquid interface.
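The abstract does not give a formula for the crossover frequency, but for two leaky-dielectric fluids it is commonly estimated by the Maxwell–Wagner interfacial relaxation frequency, f_c = (σ₁ + σ₂) / (2π(ε₁ + ε₂)). A sketch under that assumption, with illustrative fluid properties (not values from the abstract):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_wagner_crossover(sigma1, sigma2, eps_r1, eps_r2):
    """Estimate the interfacial (Maxwell-Wagner) relaxation frequency in Hz
    for two leaky-dielectric streams: f_c = (s1 + s2) / (2*pi*(e1 + e2)),
    with conductivities in S/m and relative permittivities dimensionless."""
    return (sigma1 + sigma2) / (2 * math.pi * EPS0 * (eps_r1 + eps_r2))

# Illustrative case: a 0.05 S/m stream beside a 0.005 S/m stream,
# both water-like (relative permittivity ~ 78).
f_c = maxwell_wagner_crossover(0.05, 0.005, 78.0, 78.0)
```

For these assumed values, the estimate lands in the low-MHz range, consistent with the abstract's observation that the two deflection regimes sit below 1 MHz and near 20 MHz.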

Keywords: biodetection, fluidic dielectrophoresis, interfacial polarization, liquid interface

Procedia PDF Downloads 423
1347 Teachers' Beliefs and Practices in Designing Negotiated English Lesson Plans

Authors: Joko Nurkamto

Abstract:

A lesson plan is a part of the planning phase in a learning and teaching system, framing the scenario of pedagogical activities in the classroom. It informs decisions on what to teach and how to landscape classroom interaction. Despite these benefits, the writer has witnessed that lesson plans are often viewed merely as a teaching document. Therefore, this paper will explore teachers’ beliefs and practices in designing lesson plans. It focuses primarily on how teachers and students negotiate lesson plans in which the students are deemed to be agents of instructional innovation. Additionally, the paper will discuss how such lesson plans are enacted. To investigate these issues, document analysis, in-depth interviews, participant classroom observation, and focus group discussion will be deployed as data collection methods in this explorative case study. The benefits of the paper are to show the different roles of lesson plans and to discover different ways to design and enact such plans from a socio-interactional perspective.

Keywords: instructional innovation, learning and teaching system, lesson plan, pedagogical activities, teachers' beliefs and practices

Procedia PDF Downloads 129
1346 DocPro: A Framework for Processing Semantic and Layout Information in Business Documents

Authors: Ming-Jen Huang, Chun-Fang Huang, Chiching Wei

Abstract:

With recent advances in deep neural networks, we observe new applications of NLP (natural language processing) and CV (computer vision) powered by deep neural networks for processing business documents. However, creating a real-world document processing system requires integrating several NLP and CV tasks rather than treating them separately. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. In this paper, a framework that fulfills this unified approach is presented. The framework includes a representation model definition for holding the information generated by various tasks and specifications defining the coordination between these tasks. The framework is a blueprint for building a system that can process documents with rich formats, styles, and multiple types of elements. The flexible and lightweight design of the framework can help build systems for diverse business scenarios, such as contract monitoring and reviewing.
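A minimal sketch of what such a shared representation model might look like: one record type holding textual content, layout geometry, and per-task semantic outputs so that NLP and CV tasks coordinate on the same object. The element kinds and field names are hypothetical; the abstract does not specify the framework's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """One document element carrying text, layout, and task outputs."""
    kind: str                                    # e.g. "paragraph", "table"
    text: str                                    # empty for purely graphical parts
    bbox: tuple[float, float, float, float]      # layout geometry: x0, y0, x1, y1
    semantics: dict = field(default_factory=dict)  # outputs of NLP/CV tasks

@dataclass
class Document:
    """Unified container that downstream tasks read from and write to."""
    elements: list[Element] = field(default_factory=list)

    def by_kind(self, kind: str) -> list[Element]:
        return [e for e in self.elements if e.kind == kind]

doc = Document()
doc.elements.append(
    Element("paragraph", "This Agreement is made between...", (72, 90, 540, 130)))
doc.elements.append(
    Element("table", "", (72, 150, 540, 400), {"rows": 12}))
```

A contract-review task could then query `doc.by_kind("table")` without re-running layout analysis, which is the coordination benefit the abstract describes.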

Keywords: document processing, framework, formal definition, machine learning

Procedia PDF Downloads 186
1345 Lexical Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinion, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommendation systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data. The difficulty of this task is to determine an approach that returns opinionated documents. Generally, there are two approaches used for opinion detection, i.e., lexical-based approaches and machine-learning-based approaches. In lexical-based approaches, a dictionary of sentiment words is used, and words are associated with weights. The opinion score of a document is derived from the occurrence of words from this dictionary. In machine-learning approaches, a classifier is usually trained on a set of annotated documents containing sentiment, with features such as n-grams of words, part-of-speech tags, and logical forms. The majority of these works rely on document text to determine an opinion score but do not take into account whether these texts are actually reliable. Thus, it is interesting to exploit other information to improve opinion detection. In our work, we develop a new way to compute the opinion score. We introduce the notion of a trust score: we determine opinionated documents, but also whether these opinions are really trustworthy information in relation to their topics. For that, we use the SentiWordNet lexicon to calculate opinion and trust scores, and we compute different user features (number of comments, number of useful comments, and average usefulness of reviews). After that, we combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TripAdvisor collection.
Our experimental results report that the combination of opinion score and trust score improves opinion detection.
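The combination step can be sketched as follows. The trust formula and the weighting are illustrative assumptions, since the abstract names the user features (number of comments, number of useful comments, average usefulness) but does not state the exact combination rule:

```python
def trust_score(n_comments: int, n_useful: int, avg_useful_reviews: float) -> float:
    """Toy trust estimate in [0, 1] built from the user features named in
    the abstract; the 0.5/0.5 weights and /10 scaling are assumptions."""
    usefulness = n_useful / n_comments if n_comments else 0.0
    return min(1.0, 0.5 * usefulness + 0.5 * min(avg_useful_reviews / 10.0, 1.0))

def final_score(opinion: float, trust: float, alpha: float = 0.7) -> float:
    """Weighted combination of opinion score and trust score (alpha assumed)."""
    return alpha * opinion + (1 - alpha) * trust

t = trust_score(n_comments=40, n_useful=30, avg_useful_reviews=6.0)
s = final_score(opinion=0.8, trust=t)
```

Any monotone combination would fit the abstract's description; the point of the sketch is only that the trust score down-weights opinionated documents from low-credibility users.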

Keywords: Tripadvisor, opinion detection, SentiWordNet, trust score

Procedia PDF Downloads 166
1344 Design of a Portable Shielding System for a Newly Installed NaI(Tl) Detector

Authors: Mayesha Tahsin, A.S. Mollah

Abstract:

Recently, a 1.5 × 1.5 inch NaI(Tl) detector-based gamma-ray spectroscopy system has been installed in the laboratory of the Nuclear Science and Engineering Department of the Military Institute of Science and Technology for radioactivity detection purposes. The newly installed NaI(Tl) detector has a circular lead shield of 22 mm thickness. An important consideration in any gamma-ray spectroscopy is the minimization of natural background radiation not originating from the radioactive sample being measured. Natural background gamma-ray radiation comes from naturally occurring or man-made radionuclides in the environment or from cosmic sources. Moreover, the main problem with this system is that it is not suitable for measurements of radioactivity with a large sample container, such as a Petri dish or Marinelli beaker geometry. When any laboratory installs a new detector and/or a new shield, it “must” first carry out quality and performance tests for the detector and shield. This paper describes a new portable lead shielding system that can reduce the background radiation. The intensity of gamma radiation after passing through the shielding will be calculated using the shielding equation I = I₀e^(−µx), where I₀ is the initial intensity of the gamma source, I is the intensity after passing through the shield, µ is the linear attenuation coefficient of the shielding material, and x is the thickness of the shielding material. The height and width of the shielding will be selected in order to accommodate the large sample container. The detector will be surrounded by a 4π-geometry low-activity lead shield. An additional 1.5 mm thick layer of tin and a 1 mm thick layer of copper covering the inner part of the lead shielding will be added in order to remove the characteristic X-rays from the lead shield.
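The attenuation equation is direct to implement. The linear attenuation coefficient used below (roughly 1.2 cm⁻¹ for lead near 0.6 MeV) is an illustrative assumption, not a value from the abstract:

```python
import math

def transmitted_intensity(i0: float, mu_per_cm: float, x_cm: float) -> float:
    """Narrow-beam attenuation I = I0 * exp(-mu * x), with mu in 1/cm
    and shield thickness x in cm."""
    return i0 * math.exp(-mu_per_cm * x_cm)

# Illustrative check: with an assumed mu ~ 1.2 /cm for lead near 0.6 MeV,
# 5 cm of lead transmits a fraction exp(-6), about 0.25% of a narrow beam.
frac = transmitted_intensity(1.0, 1.2, 5.0)
```

Note that this narrow-beam formula ignores build-up from scattered photons, so it gives the thickness needed for a given attenuation target only to first order.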

Keywords: shield, NaI (Tl) detector, gamma radiation, intensity, linear attenuation coefficient

Procedia PDF Downloads 124
1343 Document-level Sentiment Analysis: An Exploratory Case Study of Low-resource Language Urdu

Authors: Ammarah Irum, Muhammad Ali Tahir

Abstract:

Document-level sentiment analysis in Urdu is a challenging Natural Language Processing (NLP) task due to the difficulty of working with lengthy texts in a language with constrained resources. Deep learning models, which are complex neural network architectures, are well suited to text-based applications, in addition to data formats like audio, image, and video. To investigate the potential of deep learning for Urdu sentiment analysis, we implemented five different deep learning models, including Bidirectional Long Short-Term Memory (BiLSTM), Convolutional Neural Network (CNN), Convolutional Neural Network with Bidirectional Long Short-Term Memory (CNN-BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). In this study, we developed a hybrid deep learning model called BiLSTM-Single Layer Multi Filter Convolutional Neural Network (BiLSTM-SLMFCNN) by fusing the BiLSTM and CNN architectures. The proposed and baseline techniques were applied to the Urdu Customer Support data set and the IMDB Urdu movie review data set using pre-trained Urdu word embeddings suitable for document-level sentiment analysis. The results of these techniques were evaluated, and our proposed model outperformed all other deep learning techniques for Urdu sentiment analysis. BiLSTM-SLMFCNN achieved 83%, 79%, and 83% accuracy on the small, medium, and large IMDB Urdu movie review data sets, respectively, and 94% accuracy on the Urdu Customer Support data set.

Keywords: urdu sentiment analysis, deep learning, natural language processing, opinion mining, low-resource language

Procedia PDF Downloads 37
1342 Recurrent Neural Networks with Deep Hierarchical Mixed Structures for Chinese Document Classification

Authors: Zhaoxin Luo, Michael Zhu

Abstract:

In natural languages, there are always complex semantic hierarchies. Obtaining feature representations based on these complex semantic hierarchies is the key to the success of a model. Several RNN models have recently been proposed that use latent indicators to obtain the hierarchical structure of documents. However, a model that uses only a single-layer latent indicator cannot capture the true hierarchical structure of a language, especially a complex language like Chinese. In this paper, we propose a deep layered model that stacks arbitrarily many RNN layers equipped with latent indicators. By using EM and training hierarchically, our model solves the computational problem of stacking RNN layers and makes it possible to stack arbitrarily many of them. Our deep hierarchical model not only achieves results comparable to large pre-trained models on the Chinese short text classification problem but also achieves state-of-the-art results on the Chinese long text classification problem.

Keywords: natural language processing, recurrent neural network, hierarchical structure, document classification, Chinese

Procedia PDF Downloads 37
1341 Wave State of Self: Findings of Synchronistic Patterns in the Collective Unconscious

Authors: R. Dimitri Halley

Abstract:

The research within Jungian psychology presented here concerns the wave state of Self. What has been discovered via shared dreaming, independently correlating dreams across dreamers, lies beyond the Self stage, in the deepest layer or the wave state of Self: the very quantum ocean the Self archetype is embedded in. A quantum wave, or rhyming of meaning, constituting synergy across several dreamers was discovered in dreams and in extensive shared dream work with small groups at a post-therapy stage. Within the format of shared dreaming, we find synergy patterns beyond what Jung called the Self archetype. Jung led us up to the phase of individuation and passed the baton to Von Franz to work out the next, synchronistic stage, here proposed as the finding of the quantum patterns making up the wave state of Self. These enfolded synchronistic patterns have been found in the group format of shared dreaming among individuals approximating individuation, and their unfolding is carried by belief and faith. The reason for this format and operating system is that beyond therapy, in living reality, we find no science, no thinking or even awareness in the therapeutic sense, but rather a state of mental processing resembling that of a spiritual attitude. Thinking as such is linear and cannot contain the deepest layer of Self, the quantum core of the human being. It is self-reflection which is the container for the process at the wave state of Self. Observation locks us in an outside-in reactive flow from a first-person perspective, and hence toward the surface we see to believe, whereas here the direction of focus shifts to inside-out/intrinsic. The operating system, or language, at the wave level of Self is thus belief and synchronicity. Belief has up to now been almost the sole province of organized religions but was viewed by Jung as an inherent property of the process of individuation.
The shared dreaming stage of the synchronistic patterns forms a larger story constituting a deep connectivity unfolding around individual Selves. Dreams of independent dreamers form larger patterns that come together like puzzle pieces forming a larger story, and in this sense this group-work level builds on Jung as a post-individuation collective stage. Shared dream correlations will be presented, illustrating a larger story in terms of trails of shared synchronicity.

Keywords: belief, shared dreaming, synchronistic patterns, wave state of self

Procedia PDF Downloads 162