Search results for: accuracy limiting factor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8920

8590 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve of some models exceeded 80% when they were trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
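For illustration, the simulation idea can be sketched in a few lines: train a linear SVM on labels with a 40% error rate and compare its test accuracy against the label accuracy. This is a minimal sketch assuming synthetic linearly separable data and symmetric label noise, not the authors' actual clinical setup:

```python
# Sketch: can a model trained on noisy labels beat the label accuracy?
# Assumes synthetic linearly separable data and symmetric label noise.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y_true = (X @ rng.normal(size=10) > 0).astype(int)   # clean "gold standard"

X_tr, X_te, y_tr, y_te = train_test_split(X, y_true, test_size=0.2, random_state=0)

error_rate = 0.40                                    # 40% of training labels flipped
flip = rng.random(len(y_tr)) < error_rate
y_noisy = np.where(flip, 1 - y_tr, y_tr)

model = LinearSVC().fit(X_tr, y_noisy)
print("label accuracy:", 1 - error_rate)             # 0.60
print("model accuracy:", model.score(X_te, y_te))    # often well above 0.60
```

With a large enough sample and linearly structured data, the noise averages out and the fitted boundary can exceed the accuracy of its own training labels, which is the effect the experiments above quantify.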

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 170
8589 Analysis of the Effects of Vibrations on Tractor Drivers by Measurements With Wearable Sensors

Authors: Gubiani Rino, Nicola Zucchiatti, Da Broi Ugo, Bietresato Marco

Abstract:

The problem of vibrations in agriculture is very important due to the different types of machinery used for the different types of soil on which work is carried out. One of the most commonly used machines is the tractor, where the phenomenon has long been studied by measuring whole-body vibration with the sensor placed on the seat. However, this measurement system does not take into account the characteristics of the drivers, such as their body mass index (BMI), their gender (male, female), or the muscle fatigue they are subjected to, which depends strongly on, for example, their age. The aim of the research was therefore to place sensors not only on the seat but also along the spinal column, to check the transmission of vibration to drivers of different BMIs and genders, on different tractors and at different travel speeds. The test was also done using wearable sensors such as a dynamometer applied to the muscles, the data of which were correlated with the vibrations produced by the tractor. Initial data show that even on new tractors with pneumatic seats, the vibrations attenuate little and remain correlated with the roughness of the track travelled and the forward speed. Other important results are the root-mean-square values referred to 8 hours (A(8)x,y,z) and the maximum transient vibration values (MTVVx,y,z); among the latter, the MTVVz values were problematic (the limiting factor in most cases) and were always aggravated by speed. The MTVVx values can be lowered by a tyre-pressure adjustment system able to properly adjust the tyre pressure according to the specific situation (ground, speed) in which a tractor is operating.
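For reference, the two exposure metrics mentioned can be computed from a frequency-weighted acceleration record along one axis. The sketch below follows the usual ISO 2631-1 definitions; the sampling rate, exposure duration, and signal are illustrative, not the study's data:

```python
# Sketch: A(8) and MTVV from a frequency-weighted acceleration signal a_w(t).
# Follows the usual ISO 2631-1 definitions; fs and exposure time are assumed.
import numpy as np

fs = 200                                   # sample rate, Hz (assumed)
a_w = np.random.default_rng(1).normal(0, 0.4, size=fs * 600)  # 10 min of data

rms = np.sqrt(np.mean(a_w ** 2))           # overall r.m.s. acceleration, m/s^2

T_exp = 4.0                                # actual daily exposure, hours (assumed)
A8 = rms * np.sqrt(T_exp / 8.0)            # exposure normalised to 8 h

# MTVV: maximum of the running r.m.s. over 1 s integration windows
win = fs                                   # 1 s window
running_rms = np.sqrt(np.convolve(a_w ** 2, np.ones(win) / win, mode="valid"))
MTVV = running_rms.max()

print(f"A(8) = {A8:.3f} m/s^2, MTVV = {MTVV:.3f} m/s^2")
```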

Keywords: fatigue, effects of vibration on health, tractor driver vibrations, vibration, musculoskeletal disorders

Procedia PDF Downloads 42
8588 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features

Authors: Bo Wang

Abstract:

The geometric processing of multi-source remote sensing data using control data of different scales and different accuracies is an important research direction for multi-platform Earth observation systems. In existing block bundle adjustment methods, control information with a single observation scale and precision cannot screen out the control data or assign them reasonable and effective weights, which reduces the convergence and reliability of the adjustment results. Drawing on the theory and technology of quotient space, this project investigates several topics. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity-merging mechanism for multi-layer control information is studied and, based on the normalized scale factor, a strategy is realized to optimize the weight selection of control data that is less relevant to the adjustment system. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiple classes of control data to verify the theoretical results. This research is expected to move beyond the convention of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, so that the problem of processing multi-source remote sensing data is solved both theoretically and practically.
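The core idea of giving each control observation a weight tied to its accuracy can be illustrated with a standard weighted least-squares adjustment, where each observation's weight is the inverse of its variance. This is a simplified stand-in for the quotient-space selection described above, with invented matrices and accuracies:

```python
# Sketch: weighted least-squares adjustment with per-observation weights
# w_i = 1 / sigma_i^2 — a simplified stand-in for scale/accuracy-aware
# weight selection of control data (illustrative, not the paper's method).
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 3))                  # design matrix (toy)
x_true = np.array([1.0, -2.0, 0.5])
sigma = rng.uniform(0.01, 1.0, size=100)       # per-control accuracy (assumed)
obs = A @ x_true + rng.normal(0, sigma)        # observations of mixed accuracy

W = np.diag(1.0 / sigma**2)                    # weight matrix from accuracies
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ obs)
print(x_hat)                                   # close to x_true
```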

Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection

Procedia PDF Downloads 259
8587 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners

Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani

Abstract:

Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed through conversation, because an erroneous form may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that play a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication, leading the interlocutor to attempt to remedy the gap through talk. Therefore, this study investigated whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. Speaking sections of the PET were used as pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was estimated as the percentage of clauses with no grammatical errors among all produced clauses. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during treatment sessions, and the students were required to practice conversational strategies when doing speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. Statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had a stronger impact on encouraging negotiation of meaning and increasing the frequency of conversational strategies in each session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy, with the reasoning gap tasks more effective than the information gap tasks in improving the learners' speaking accuracy.

Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks

Procedia PDF Downloads 288
8586 SNR Classification Using Multiple CNNs

Authors: Thinh Ngo, Paul Rad, Brian Kelley

Abstract:

Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. However, the traditional application of a single DL classifier to SNR prediction is found to yield unacceptably low accuracy of less than 50%. In this paper, we use a divide-and-conquer algorithm and a classifier-fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification of a single SNR with two labels: 'less than' and 'greater than or equal to'. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 dB ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves individual SNR prediction accuracy of 92%, composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
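The divide-and-conquer fusion can be sketched with one binary classifier per SNR threshold whose positive votes are summed to recover the SNR class. Simple logistic regressions stand in for the per-threshold CNNs here, and the feature and SNR grid are invented for illustration:

```python
# Sketch: divide-and-conquer SNR classification by fusing binary classifiers.
# One classifier per threshold answers "SNR >= t?"; counting the positive
# answers recovers the SNR class. Logistic regressions stand in for the CNNs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
snr_grid = np.arange(-20, 33, 4)                   # candidate SNR levels, dB
y = rng.choice(len(snr_grid), size=4000)           # true SNR class indices
X = snr_grid[y][:, None] + rng.normal(0, 3, (4000, 1))  # noisy feature (toy)

# One binary "greater than or equal" classifier per internal threshold
clfs = [LogisticRegression().fit(X, (y >= k).astype(int))
        for k in range(1, len(snr_grid))]

votes = np.sum([c.predict(X) for c in clfs], axis=0)  # positive count = class
print(f"composite accuracy: {np.mean(votes == y):.2f}")
```

Each binary problem is much easier than a 14-way classification, which is why the fused ensemble converges faster than a single monolithic classifier.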

Keywords: classification, CNN, deep learning, prediction, SNR

Procedia PDF Downloads 110
8585 Validity and Reliability of the Iranian Version of the Self-Expansion Questionnaire

Authors: Mehravar Javid, James Sexton, Farzaneh Amani, Kainaz Patravala

Abstract:

Self-expansion is a process through which people expand the dimensions of their self-concept by incorporating novel content into their sense and experience of identity. Greater self-expansion predicts positive consequences for individuals and romantic relationships. The Self-Expansion Questionnaire (SEQ), originally developed by Lewandowski & Aron (2002), assumes that self-expansion is constituted of key components from the self-expansion model. This study aimed to confirm the factor structure of the SEQ and adapt the scale's questions to Iranian culture. The sample included 190 participants, selected by simple random sampling, who responded to 14 items. Using Amos-21 and SPSS-21, descriptive statistics, Pearson correlations, and a Confirmatory Factor Analysis (CFA) were calculated. Cronbach's alpha coefficient for the total SEQ items was 0.92. Results of the CFA supported the factor structure of the SEQ (RMSEA = 0.08, GFI = 0.88, CFI = 0.92), showing that the model fits well and that all SEQ items correlate highly and are directly and significantly related. Thus, the SEQ demonstrated acceptable psychometric properties among Tehran University students. Looking forward, it would be interesting to see the implications of the scale as applied to romantic relationships.
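The reported internal consistency can be reproduced from an item-response matrix with the standard Cronbach's alpha formula. The data below are simulated to match the study's dimensions (190 respondents, 14 items); only the formula is standard:

```python
# Sketch: Cronbach's alpha for a k-item scale (standard formula; data simulated).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(4)
latent = rng.normal(size=(190, 1))                    # shared trait, n = 190
items = latent + rng.normal(0, 0.5, size=(190, 14))   # 14 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```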

Keywords: validity, reliability, confirmatory factor analysis, self-expansion questionnaire

Procedia PDF Downloads 56
8584 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images; its importance lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in their diagnosis. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. The study utilized the Mask R-CNN algorithm, a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images, and its performance was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction, using three different classifiers, namely Random Forest, K-Nearest Neighbor, and Support Vector Machine. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with ensemble learning techniques significantly improving the accuracy of disease prediction. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the training dataset, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images, which can improve the accuracy of disease diagnosis and ultimately lead to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
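A minimal sketch of the symptom-based ensemble described above, combining the three named classifiers with scikit-learn's soft-voting ensemble; the symptom matrix and disease labels are random placeholders, not the study's dataset:

```python
# Sketch: ensemble of the three symptom classifiers named above via soft voting.
# X_sym / y_dis are placeholders for the real symptom matrix and disease labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X_sym = rng.integers(0, 2, size=(500, 30))     # binary symptom indicators
y_dis = rng.integers(0, 4, size=500)           # four disease classes (dummy)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier()),
                ("knn", KNeighborsClassifier()),
                ("svm", SVC(probability=True))],   # probability=True for soft vote
    voting="soft")

print(cross_val_score(ensemble, X_sym, y_dis, cv=5).mean())
```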

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 113
8583 A Study on the Iterative Scheme for Stratified Shields Gamma Ray Buildup Factor Using Layer-Splitting Technique in Double-Layer Shield

Authors: Sari F. Alkhatib, Chang Je Park, Gyuhong Roh, Daeseong Jo

Abstract:

The iterative scheme used to treat buildup factors for stratified shields of three layers or more is investigated here using the layer-splitting technique. The second layer in a double-layer shield was split into two equivalent layers, and the scheme was implemented on the new 'three-layer' shield configuration. The results of this manipulation for water-lead and water-iron shield combinations are presented here for 1 MeV photons. It was found that splitting the second layer introduces some deviation in the overall buildup factor. This expected deviation appeared to be higher in the case of a low-Z layer followed by a high-Z one. However, the iterative scheme showed great consistency and strong coherence with the introduced changes.

Keywords: build-up factor, iterative scheme, stratified shields, radiation protection

Procedia PDF Downloads 553
8582 Impact of a Virtual Reality-Training on Real-World Hockey Skill: An Intervention Trial

Authors: Matthew Buns

Abstract:

Training specificity is imperative for successful performance of the elite athlete. Virtual reality (VR) has been successfully applied to a broad range of training domains. However, to date there is little research investigating the use of VR for sport training. The purpose of this study was to address the question of whether virtual reality (VR) training can improve real-world hockey shooting performance. Twenty-four volunteers were recruited and randomly assigned either to complete the virtual training intervention or to enter a control group with no training. Four primary types of data were collected: 1) participants' experience with video games and hockey, 2) participants' motivation toward video game use, 3) participants' technical performance in real-world hockey, and 4) participants' technical performance in virtual hockey. One-way analysis of variance (ANOVA) indicated that the intervention group demonstrated significantly more real-world hockey accuracy [F(1,24) = 15.43, p < .01, ES = 0.56] while shooting on goal than their control group counterparts [intervention M accuracy = 54.17%, SD = 12.38; control M accuracy = 46.76%, SD = 13.45]. A repeated-measures multivariate analysis of variance (MANOVA) indicated significantly higher outcome scores on real-world accuracy (35.42% versus 54.17%; ES = 1.52) and velocity (51.10 mph versus 65.50 mph; ES = 0.86) of hockey shooting on goal. This research supports the idea that virtual training is an effective tool for increasing real-world hockey skill.
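The group comparison can be reproduced with a one-way ANOVA and a pooled-SD effect size. The sketch below simulates scores around the reported group means and standard deviations; the per-group sample size is an assumption:

```python
# Sketch: one-way ANOVA and Cohen's d for two groups of shooting-accuracy
# scores (simulated around the reported means/SDs; n per group assumed 12).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(6)
intervention = rng.normal(54.17, 12.38, size=12)
control = rng.normal(46.76, 13.45, size=12)

F, p = f_oneway(intervention, control)

# Cohen's d with pooled standard deviation
n1, n2 = len(intervention), len(control)
pooled_sd = np.sqrt(((n1 - 1) * intervention.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (intervention.mean() - control.mean()) / pooled_sd
print(f"F = {F:.2f}, p = {p:.3f}, d = {d:.2f}")
```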

Keywords: virtual training, hockey skills, video game, esports

Procedia PDF Downloads 124
8581 Mg Doped CuCrO₂ Thin Oxide Films for Thermoelectric Properties

Authors: I. Sinnarasa, Y. Thimont, L. Presmanes, A. Barnabé

Abstract:

Thermoelectricity is a promising technique for recovering waste heat as electricity without using moving parts. The thermoelectric (TE) effect is defined as the conversion of a temperature gradient directly into electricity and vice versa. To optimize TE materials, the power factor (PF = σS², where σ is the electrical conductivity and S the Seebeck coefficient) must be increased by adjusting the carrier concentration, and/or the lattice thermal conductivity Kₜₕ must be reduced by introducing scattering centers with point defects, interfaces, and nanostructuration. The PF does not capture the advantages of a thin film because it does not take the thermal conductivity into account; in general, the thermal conductivity of a thin film is lower than that of the bulk material due to its microstructure and to scattering effects that increase with decreasing thickness. Delafossite-type oxides CuᴵMᴵᴵᴵO₂ have received attention mainly for their optoelectronic properties as p-type semiconductors; they also exhibit interesting thermoelectric (TE) properties due to their high electrical conductivity and their stability in room atmosphere. As there are few proper studies on the TE properties of Mg-doped CuCrO₂ thin films, we have investigated the influence of the annealing temperature on the electrical conductivity and the Seebeck coefficient of Mg-doped CuCrO₂ thin films and calculated the PF in the temperature range from 40 °C to 220 °C. To this end, we deposited Mg-doped CuCrO₂ thin films on fused silica substrates by RF magnetron sputtering. This study was carried out on 300 nm thin films. The as-deposited Mg-doped CuCrO₂ thin films were annealed at different temperatures (from 450 to 650 °C) under primary vacuum. The electrical conductivity and Seebeck coefficient of the thin films were measured from 40 to 220 °C. The highest electrical conductivity of 0.60 S.cm⁻¹, with a Seebeck coefficient of +329 µV.K⁻¹ at 40 °C, was obtained for the sample annealed at 550 °C. The calculated power factor of the optimized CuCrO₂:Mg thin film was 6 µW.m⁻¹.K⁻² at 40 °C. Owing to the constant Seebeck coefficient and the electrical conductivity increasing with temperature, it reached 38 µW.m⁻¹.K⁻² at 220 °C, which is quite a good result for an oxide thin film. Moreover, the degenerate behavior and the hopping mechanism of the CuCrO₂:Mg thin film were elucidated. The high, temperature-constant Seebeck coefficient and the stability in room atmosphere could be a great advantage for applying this material in high-accuracy temperature measurement devices.
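The reported power factor follows directly from PF = σS²; a quick check with the measured values at 40 °C:

```python
# Quick check: PF = sigma * S^2 with the values reported at 40 degrees C.
sigma = 0.60 * 100          # 0.60 S/cm -> 60 S/m
S = 329e-6                  # Seebeck coefficient, V/K

PF = sigma * S**2           # W.m^-1.K^-2
print(f"PF = {PF * 1e6:.1f} uW.m^-1.K^-2")   # ~6.5, consistent with the ~6 reported
```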

Keywords: thermoelectric, oxides, delafossite, thin film, power factor, degenerated semiconductor, hopping mode

Procedia PDF Downloads 177
8580 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis

Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen

Abstract:

Hepatitis is one of the most common and dangerous diseases affecting humankind, exposing millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an 'indirect approach' is used for fuzzy system modeling by implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In this system, the process of diagnosis faces vagueness and uncertainty in the final decision, so the imprecise knowledge was then managed using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty.
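The interval Type-II idea of a footprint of uncertainty can be sketched with a Gaussian membership function whose spread is itself uncertain, giving a lower and an upper membership bound. All parameters below are illustrative, and the simple averaging type-reduction is a stand-in for the paper's full inference:

```python
# Sketch: an interval Type-II fuzzy set as a Gaussian membership function with
# an uncertain standard deviation, yielding lower/upper membership bounds
# (the footprint of uncertainty). Parameters are illustrative.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

c = 5.0                            # centre of a fuzzy set "elevated" (assumed)
s_lo, s_hi = 0.8, 1.4              # uncertain spread -> interval membership

x_obs = 6.2                        # observed (normalised) lab-test feature
mu_lower = gauss(x_obs, c, s_lo)   # lower membership bound
mu_upper = gauss(x_obs, c, s_hi)   # upper membership bound

# A simple type-reduction: take the midpoint of the interval
membership = 0.5 * (mu_lower + mu_upper)
print(f"interval membership: [{mu_lower:.2f}, {mu_upper:.2f}], "
      f"centre {membership:.2f}")
```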

Keywords: hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection

Procedia PDF Downloads 280
8579 Contingency Screening Using Risk Factor Considering Transmission Line Outage

Authors: M. Marsadek, A. Mohamed

Abstract:

Power system security analysis is the most time-demanding process due to the large number of possible contingencies that need to be analyzed. In a power system, any contingency resulting in a security violation, such as line overload or low voltage, may occur for a number of reasons at any time. To rank a contingency efficiently, both its probability and the extent of the security violation must be considered, so as not to underestimate the risk associated with the contingency. This paper proposes a contingency ranking method that takes into account the probabilistic nature of the power system and the severity of the contingency, using a newly developed method based on a risk factor. The proposed technique is implemented on the IEEE 24-bus system.
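The ranking idea combines outage probability with the severity of the resulting violation. A minimal sketch with invented contingency data, where the risk factor is probability times a composite severity:

```python
# Sketch: ranking contingencies by risk = probability x severity.
# Severity here is a made-up composite of overload and low-voltage indices.
contingencies = [
    # (line outage, outage probability, overload severity, low-voltage severity)
    ("L12", 0.020, 0.8, 0.1),
    ("L7",  0.005, 2.5, 1.2),
    ("L23", 0.050, 0.2, 0.0),
]

ranked = sorted(contingencies,
                key=lambda c: c[1] * (c[2] + c[3]),   # risk factor
                reverse=True)

for name, p, ov, lv in ranked:
    print(f"{name}: risk = {p * (ov + lv):.4f}")
```

Ranking by this product, rather than by severity alone, keeps a rare but catastrophic outage from being underestimated relative to a frequent but mild one.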

Keywords: line overload, low voltage, probability, risk factor, severity

Procedia PDF Downloads 524
8578 Use of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors

Authors: V. Rashtchi, H. Bizhani, F. R. Tatari

Abstract:

This paper presents a new online loss-minimization scheme for an induction motor drive. Among the many loss-minimization algorithms (LMAs) for an induction motor, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and its losses, and in the development of the loss model there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, so the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement.
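The online search for an optimum flux level can be illustrated with a bare-bones PSO loop minimising a stand-in loss function of the magnetising current. The loss curve below is a toy convex function, not the paper's d-q motor loss model:

```python
# Sketch: bare-bones PSO searching for the flux (magnetising current) level
# that minimises drive losses. The loss curve is a toy stand-in.
import numpy as np

def losses(i_mr):                       # toy convex loss vs. magnetising current
    return 0.5 * (i_mr - 0.8) ** 2 + 0.05 / np.maximum(i_mr, 1e-3)

rng = np.random.default_rng(7)
n, iters = 20, 50
pos = rng.uniform(0.1, 2.0, n)          # particle positions (flux levels)
vel = np.zeros(n)
pbest = pos.copy()
gbest = pos[np.argmin(losses(pos))]

w, c1, c2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 2.5)
    better = losses(pos) < losses(pbest)
    pbest = np.where(better, pos, pbest)
    gbest = pbest[np.argmin(losses(pbest))]

print(f"optimum flux level ~ {gbest:.3f}")
```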

Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization

Procedia PDF Downloads 612
8577 Design an Assessment Model of Research and Development Capabilities with the New Product Development Approach: A Case Study of Iran Khodro Company

Authors: Hamid Hanifi, Adel Azar, Alireza Booshehri

Abstract:

To know the capability level of R&D units in the automotive industry, it is essential that organizations continuously compare themselves against standard levels and against those performing better, so that they can keep improving. Given the importance of this issue, this research presents an assessment model for R&D capabilities, with a focus on new product development in the automotive industry of Iran. Iran Khodro Company was selected for the case study. First, from a literature review, about 200 indicators relevant to R&D capabilities and new product development were extracted. Of these, the 29 most important indicators were selected by industry and academic experts, and a questionnaire was distributed to the statistical population, which consisted of 410 individuals at Iran Khodro Company. The 410 questionnaires were used for exploratory factor analysis, and the data of 308 questionnaires drawn randomly from the same population were then used for confirmatory factor analysis. The exploratory factor analysis grouped the indicators into 9 secondary dimensions, which were named according to the literature and expert opinion. Using structural equation modeling and the AMOS software, confirmatory factor analysis was conducted and the final model with 9 secondary dimensions was confirmed. The 9 secondary dimensions are: 1) research and design capability, 2) customer and market capability, 3) technology capability, 4) financial resources capability, 5) organizational chart, 6) intellectual capital capability, 7) NPD process capability, 8) managerial capability, and 9) strategy capability.
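The factor-extraction step can be sketched with scikit-learn's FactorAnalysis on a standardized indicator matrix. The responses here are simulated; the study itself used SPSS/AMOS-style EFA and CFA:

```python
# Sketch: exploratory factor analysis on standardised questionnaire responses.
# Data are simulated; the study used dedicated EFA/CFA software (SPSS, AMOS).
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(8)
responses = rng.integers(1, 6, size=(410, 29)).astype(float)  # 410 x 29 Likert

Z = StandardScaler().fit_transform(responses)
fa = FactorAnalysis(n_components=9, random_state=0).fit(Z)

loadings = fa.components_.T               # (29 indicators x 9 factors)
print(loadings.shape)                     # inspect which indicators load where
```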

Keywords: research and development, new products development, structural equations, exploratory factor analysis, confirmatory factor analysis

Procedia PDF Downloads 313
8576 Investigating the Encouraging Factors for Scholarly Works Contribution towards Institutional Repository: A Case Study at a Malaysian University

Authors: Mohd Rashid bin Ab Hamid, Noor Azura binti Omar, Zainol Bin Mustafa

Abstract:

Purpose: The aim of this paper is to study the factors encouraging the contribution of scholarly works to an institutional repository among academicians at a Malaysian university. Methods: This paper uses a questionnaire to collect data on respondents' perceptions of the institutional repository efforts at the university under study. Several encouraging factors were identified and measured using descriptive statistics. The factors relate to content contribution, i.e., a personal factor, a professional factor, an organizational factor, and a technological factor. Findings: The study found that all four encouraging factors were related to academicians' contribution of scholarly works at the university. Research limitations: This study used a single case study, so generalization to all Malaysian universities should be made with care. Practical implications: The university library should look into these four encouraging factors in order to enhance contributions from academicians to the repository. Originality/value: This paper provides basic information for the university's knowledge management officers seeking to attract more contributions.

Keywords: institutional repository, information retrieval, information storage and retrieval

Procedia PDF Downloads 541
8575 MRI Quality Control Using Texture Analysis and Spatial Metrics

Authors: Kumar Kanudkuri, A. Sandhya

Abstract:

Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols, or the parameters within them, can change over time due to changes in the recommendations of physician groups, updates in the software, or the availability of new technologies. Most of the time, the changes are made by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. It is therefore important to give proper guidelines to MRI technologists so that they do not change parameters in ways that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality, however, depends on the operator/reviewer and may vary among operators, and even for the same operator at different times. Overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To solve this problem, we propose a robust, open-source, automated MRI image-control tool. We designed and developed an automatic analysis tool for measuring MRI image quality (IQ) metrics (signal-to-noise ratio (SNR), signal-to-noise ratio uniformity (SNRU), visual information fidelity (VIF), feature similarity (FSIM), gray-level co-occurrence matrix (GLCM) features, slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution), which provided a good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
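Two of the listed metrics are easy to sketch: SNR from signal and noise regions of interest, and GLCM texture features via scikit-image. The image and ROI coordinates below are placeholders for an actual ACR phantom acquisition:

```python
# Sketch: phantom SNR from signal/noise ROIs, plus GLCM texture features.
# The image and ROI coordinates are placeholders for an ACR phantom scan.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.default_rng(9).integers(0, 256, (256, 256)).astype(np.uint8)

signal_roi = img[100:156, 100:156]         # centre of the phantom (assumed)
noise_roi = img[5:30, 5:30]                # background air region (assumed)
snr = signal_roi.mean() / noise_roi.std()

glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, normed=True)
contrast = graycoprops(glcm, "contrast")[0, 0]
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
print(f"SNR = {snr:.1f}, contrast = {contrast:.1f}, "
      f"homogeneity = {homogeneity:.3f}")
```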

Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy

Procedia PDF Downloads 136
8574 Economic Growth and Total Factor Productivity by the Use of Railway Transport in the City of Medellín - Colombia in the Period 2012-2016

Authors: Mauricio Molina

Abstract:

The present research project aims to determine whether the population that uses the underground rail system in the city of Medellín experiences an increase in total factor productivity, and to specify an economic model that can establish this clearly. The project concentrates on the surroundings of the underground system over a period of 60 months in the city of Medellín. According to the bibliographic review, in most cases cities that have rail transport systems are more productive. The study therefore presents an analysis intended to determine whether the use of rail transport effectively improves the productivity of a city and its inhabitants.
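Total factor productivity is conventionally measured as the Solow residual of a Cobb-Douglas production function, A = Y / (K^α L^(1-α)). A small sketch with illustrative figures, not Medellín data:

```python
# Sketch: TFP as the Solow residual, A = Y / (K^alpha * L^(1-alpha)).
# All figures are illustrative, not Medellin data.
import numpy as np

alpha = 0.35                      # capital share (assumed)
Y = np.array([100.0, 104.0])      # output in two periods
K = np.array([200.0, 204.0])      # capital stock
L = np.array([50.0, 50.5])        # labour

A = Y / (K**alpha * L**(1 - alpha))
tfp_growth = np.log(A[1]) - np.log(A[0])
print(f"TFP growth ~ {tfp_growth:.3%}")
```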

Keywords: economic growth, mobility urban, total factor productivity, rail transport

Procedia PDF Downloads 263
8573 Feeding Habits and Condition Factor of Oreochromis niloticus in Lake Alau, Northeastern Nigeria

Authors: Zahra Ali Lawan, Ali Abdulhakim

Abstract:

The stomach contents of 100 Oreochromis niloticus, sampled between April and August 2011 in Lake Alau, northeastern Nigeria, were examined. Herbs and algae were the main contents, representing 40.15% and 23.36% respectively, followed by mud/sand components, insect parts, and fish remains, representing 14.60%, 13.87%, and 8.03% respectively. Oreochromis niloticus was affirmed to be a herbivore and a benthic feeder due to the presence of both herbs and mud/sand among its stomach contents. The mean stomach fullness was 70.94% and stomach emptiness 29.06%. The average condition factor of the fish was 1.69, with the best conditions recorded in the dry months of April and May at 1.74 and 1.94 respectively. The general trend for this species in this study was that relatively higher condition factors were recorded at relatively greater lengths.
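The condition factor reported here is conventionally Fulton's K, computed from weight and length; a one-line check with illustrative measurements:

```python
# Sketch: Fulton's condition factor K = 100 * W / L^3
# (W in grams, L in centimetres; figures are illustrative).
def condition_factor(weight_g: float, length_cm: float) -> float:
    return 100.0 * weight_g / length_cm**3

print(condition_factor(weight_g=250.0, length_cm=24.0))  # ~1.81
```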

Keywords: stomach contents, oreochromis niloticus, herbivores, Lake Alau

Procedia PDF Downloads 339
8572 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of dyslexia and dysgraphia, two learning disabilities that affect reading and writing abilities respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for dyslexia and dysgraphia frequently rely on subjective assessments, which makes broad screening difficult and time-consuming; as a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of dyslexia and dysgraphia. Specifically, Resnet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using grid search cross-validation, which allowed the optimal values to be identified. This approach proved effective in accurately predicting the risk of dyslexia and dysgraphia, providing a valuable tool for early detection and intervention. The Resnet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved a training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of dyslexia and dysgraphia.
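The final fusion stage can be sketched as an MLP over the concatenated CNN outputs and auxiliary inputs, tuned with grid search. The feature matrix and the parameter grid below are placeholders for the real pipeline:

```python
# Sketch: MLP over concatenated CNN outputs plus auxiliary inputs, tuned with
# GridSearchCV. The feature matrix and grid are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(10)
cnn_scores = rng.random((300, 3))            # Resnet50/VGG16/YOLOv8 outputs
other_inputs = rng.random((300, 5))          # e.g., age, assessment scores
X = np.hstack([cnn_scores, other_inputs])
y = rng.integers(0, 2, 300)                  # at-risk / not-at-risk (dummy)

param_grid = {"hidden_layer_sizes": [(32,), (64, 32)],
              "alpha": [1e-4, 1e-3]}
search = GridSearchCV(MLPClassifier(max_iter=2000), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```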

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 59
8571 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in their characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a CNN (Convolutional Neural Network) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A VAN architecture was adopted, incorporating additional channels for the Canny edge features, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with the Canny edge features applied than with a model that used only the original grayscale images. When tested on the individual languages, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy in identifying and categorizing characters from these scripts.
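Adding Canny edges as an extra input channel is straightforward with OpenCV; a sketch of the preprocessing step, where the file path and thresholds are illustrative:

```python
# Sketch: stacking a Canny edge map onto the grayscale image as a second
# input channel for the character recogniser. Path and thresholds are
# illustrative placeholders.
import cv2
import numpy as np

img = cv2.imread("kannada_char.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=50, threshold2=150)

two_channel = np.stack([img, edges], axis=-1).astype(np.float32) / 255.0
print(two_channel.shape)    # (H, W, 2), fed to the CNN/VAN input layer
```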

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 29
8570 A Comparison of Clinical and Pathological TNM Staging in a COVID-19 Era

Authors: Sophie Mills, Leila L. Touil, Richard Sisson

Abstract:

Introduction: The TNM classification is the global standard for the staging of head and neck cancers. Accurate clinical-radiological staging of tumours (cTNM) is essential to predict prognosis, facilitate surgical planning and determine the need for other therapeutic modalities. This study aims to determine the accuracy of pre-operative cTNM staging using pathological TNM (pTNM) and consider possible causes of TNM stage migration, noting any variation throughout the COVID-19 pandemic. Materials and Methods: A retrospective cohort study examined records of patients with surgical management of head and neck cancer at a tertiary head and neck centre from November 2019 to November 2020. Data was extracted from Somerset Cancer Registry and histopathology reports. cTNM and pTNM were compared before and during the first wave of COVID-19, as well as with other potential prognostic factors such as tumour site and tumour stage. Results: 119 cases were identified, of which 52.1% (n=62) were male, and 47.9% (n=57) were female with a mean age of 67 years. Clinical and pathological staging differed in 54.6% (n=65) of cases. Of the patients with stage migration, 40.4% (n=23) were up-staged and 59.6% (n=34) were down-staged compared with pTNM. There was no significant difference in the accuracy of cTNM staging compared with age, sex, or tumour site. There was a statistically highly significant (p < 0.001) correlation between cTNM accuracy and tumour stage, with the accuracy of cTNM staging decreasing with the advancement of pTNM staging. No statistically significant variation was noted between patients staged prior to and during COVID-19. Conclusions: Discrepancies in staging can impact management and outcomes for patients. This study found that the higher the pTNM, the more likely stage migration will occur. These findings are concordant with the oncology literature, which highlights the need to improve the accuracy of cTNM staging for more advanced tumours.

Keywords: COVID-19, head and neck cancer, stage migration, TNM staging

Procedia PDF Downloads 88
8569 The Outcome of Using Machine Learning in Medical Imaging

Authors: Adel Edwar Waheeb Louka

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, however, X-rays have not been widely used to detect and diagnose COVID-19, mainly because of low diagnostic accuracy and confounding with pneumonia, another respiratory disease. Research in this field has nevertheless suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images; the training images are cropped beforehand to eliminate distractions. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models' accuracy, precision, recall, F1-score, IOU, and loss are calculated on the external validation data. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
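The transfer-learning classifier can be sketched in Keras with a frozen DenseNet201 backbone and a small dense head. The input size and head layers are assumptions, and the paper's autoencoder stage is omitted for brevity:

```python
# Sketch: DenseNet201 transfer learning for 3-class chest X-ray classification
# (COVID-19 / pneumonia / normal). Input size and head are assumptions; the
# paper's autoencoder stage is omitted.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                      # freeze pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```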

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 32
8568 Challenges in the Material and Action-Resistance Factor Design for Embedded Retaining Wall Limit State Analysis

Authors: Kreso Ivandic, Filip Dodigovic, Damir Stuhec

Abstract:

The paper deals with the proposed 'Material factor' and 'Action-resistance factor' design methods for the design of embedded retaining walls. A parametric analysis was performed to evaluate the differences between the output values of the two methods and to compare them with the classic-approach computation. Choosing between the proposed calculation design methods in Eurocode 7 is a challenge with respect to current technical regulations and regular engineering practice. The basic criterion for applying a particular design method is to ensure, at minimum, an equal degree of reliability relative to current practice. The procedure of combining the relevant partial coefficients according to the design methods was carried out. The use of these partial coefficients should result in the same level of safety regardless of load combinations, material characteristics, and problem geometry. This proposed approach to the partial coefficients related to the material and/or action-resistance is aimed at building a bridge between the calculations used so far and a pure probability analysis. The measure used to compare the results was an equivalent safety factor determined for each analysis. The results show a visibly wide span of equivalent values of the classic safety factors.
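The comparison of the two philosophies can be illustrated by factoring soil strength (material factor) versus factoring resistance and actions directly (action-resistance factor), then comparing utilisations against the classic overall factor. The partial-factor values and the toy resistance/action figures below are purely illustrative, not the paper's cases:

```python
# Sketch: comparing 'material factor' and 'action-resistance factor' design
# against a classic overall safety factor. Partial factors and R_k/E_k values
# are illustrative only.
import math

phi_k = math.radians(32)        # characteristic friction angle (assumed)
R_k, E_k = 850.0, 500.0         # characteristic resistance / action effect (toy)

# Classic approach: one overall safety factor
F_classic = R_k / E_k

# Material factor design (MFD): factor tan(phi) by gamma_phi
gamma_phi = 1.25
phi_d = math.atan(math.tan(phi_k) / gamma_phi)
R_d_mfd = R_k * math.tan(phi_d) / math.tan(phi_k)   # crude strength scaling (toy)

# Action-resistance factor design (RFD): factor resistance and actions directly
gamma_R, gamma_E = 1.4, 1.35
utilisation_mfd = E_k / R_d_mfd
utilisation_rfd = (E_k * gamma_E) / (R_k / gamma_R)

print(f"classic F = {F_classic:.2f}")
print(f"MFD utilisation = {utilisation_mfd:.2f}, "
      f"RFD utilisation = {utilisation_rfd:.2f}")
```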

Keywords: action-resistance factor design, classic approach, embedded retaining wall, Eurocode 7, limit states, material factor design

Procedia PDF Downloads 211
8567 Borrower Discouragement in Spain: An Empirical Analysis Using a Survey Data Set

Authors: Ginés Hernández-Cánovas, Mª Camino Ramón-Llorens, Johanna Koëter-Kant

Abstract:

This paper uses a survey data set of 837 Spanish SMEs to analyze the association between borrower discouragement and the firm's prior strategic decisions, while controlling for firm and owner characteristics. Existing literature has neglected factors limiting the demand for resources, relying instead on arguments that attempt to explain the existence of discouraged borrowers solely in terms of lack of access to the supply of credit. The objective of this paper is to show that factors limiting the demand for resources and, therefore, reducing the availability of funds can be traced back to the firm manager's decisions. Our hypothesis is that managers who undertake strategic decisions seeking growth or improvement in their business performance participate more in the banking market than those content with their current business situation. Our results show that SMEs that undertake an active role in research and development activities and that achieve improvements in the operating performance of their business are less likely to be discouraged from applying for a loan. Who needs credit and who applies for credit matters to firms, prospective lenders, and policymakers interested in the financial health of these firms. Credit-constrained firms are less likely to invest in R&D and to introduce new products, possibly harming long-term economic growth. Knowing how important borrower discouragement is in Europe is important for judging the priority that should be attached to government policies aimed at reducing its effects. For example, policymakers could encourage transparency about credit eligibility and conditions in order to reduce discouragement.

Keywords: discouragement, financial constraints, SMEs financing

Procedia PDF Downloads 331
8566 Opinion Mining to Extract Community Emotions on Covid-19 Immunization Possible Side Effects

Authors: Yahya Almurtadha, Mukhtar Ghaleb, Ahmed M. Shamsan Saleh

Abstract:

The world witnessed a fierce attack from the Covid-19 virus, which affected public life socially, economically, and in terms of health and psychology. The world's governments tried to confront the pandemic by imposing a number of precautionary measures such as general closures, curfews, and social distancing. Scientists also made strenuous efforts to develop an effective vaccine to train the immune system to develop antibodies against the virus, thus reducing its symptoms and limiting its spread. Artificial intelligence, along with researchers and medical authorities, accelerated the vaccine development process through big data processing and simulation. On the other hand, one of the most important negative impacts of Covid-19 was the state of anxiety and fear due to the spread of rumors through social media, which prompted governments to try to reassure the public with the means available. This study proposes using sentiment analysis (also known as opinion mining) and deep learning as efficient artificial intelligence techniques to retrieve public tweets from Twitter and analyze them automatically to extract opinions, expressions, and feelings, negative or positive, about the symptoms people may feel after vaccination. Sentiment analysis is characterized by its ability to access what the public posts on social media in record time and at a lower cost than traditional means such as questionnaires and interviews, not to mention the accuracy of the information, as it comes from what the public expresses voluntarily.
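The tweet-level analysis can be sketched with NLTK's VADER sentiment analyzer as a lightweight stand-in for the proposed deep learning model; the example tweets are invented:

```python
# Sketch: lexicon-based sentiment scoring of vaccination tweets with VADER,
# a lightweight stand-in for the proposed deep learning classifier.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

tweets = [                                   # invented examples
    "Mild sore arm after the jab, felt fine the next day",
    "Terrible headache and fever for two days after the second dose",
]
for t in tweets:
    score = sia.polarity_scores(t)["compound"]   # -1 (negative) .. +1 (positive)
    label = "positive" if score >= 0 else "negative"
    print(f"{label:8s} {score:+.2f}  {t}")
```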

Keywords: deep learning, opinion mining, natural language processing, sentiment analysis

Procedia PDF Downloads 146
8565 Machine Vision System for Measuring the Quality of Bulk Sun-dried Organic Raisins

Authors: Navab Karimi, Tohid Alizadeh

Abstract:

An intelligent vision-based system was designed to measure the quality and purity of raisins. A machine vision setup was utilized to capture images of bulk raisins with 5-50% mixtures of pure and impure berries. The textural features of the bulk raisins were extracted using grey-level histograms, the co-occurrence matrix, and local binary patterns (a total of 108 features). A genetic algorithm and neural network regression were used for selecting and ranking the best features (21 features). The GLCM feature set was found to have the highest accuracy (92.4%) among the sets. Subsequently, multiple feature combinations from the previous stage were fed into a second, linear regression to increase accuracy, wherein a combination of 16 features was found to be the optimum. Finally, a Support Vector Machine (SVM) classifier was used to differentiate the mixtures, producing the best efficiency and accuracy of 96.2% and 97.35%, respectively.
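One of the named texture descriptors, the local binary pattern, can be sketched with scikit-image, feeding the resulting histograms to an SVM. Images and labels here are random placeholders for the bulk-raisin photos:

```python
# Sketch: LBP histogram features per raisin image, classified with an SVM.
# Images/labels are random placeholders for the bulk-raisin photos.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
images = rng.integers(0, 256, size=(60, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 2, size=60)            # pure vs. impure mix (dummy)

P, R = 8, 1                                     # LBP neighbours and radius
def lbp_hist(img):
    lbp = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.array([lbp_hist(im) for im in images])
print(cross_val_score(SVC(), X, labels, cv=5).mean())
```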

Keywords: sun-dried organic raisin, genetic algorithm, feature extraction, ANN regression, linear regression, support vector machine, South Azerbaijan

Procedia PDF Downloads 50
8564 Small Text Extraction from Documents and Chart Images

Authors: Rominkumar Busa, Shahira K. C., Lijiya A.

Abstract:

Text recognition is an important area of computer vision that deals with detecting and recognising text in an image. Optical character recognition (OCR) is a saturated area these days, with very good text recognition accuracy. However, when the same OCR methods are applied to text with small font sizes, such as the text in chart images, the recognition rate drops below 30%. This work aims to extract small text in images using a deep learning model, a CRNN with CTC loss. Text recognition accuracy is found to improve by applying super-resolution image enhancement prior to the CRNN model. We also observe that the text recognition rate increases by a further 18% with the proposed method, which involves super-resolution and character segmentation followed by the CRNN with CTC loss. The efficiency of the proposed method indicates that further pre-processing of chart-image text and other small-text images will improve accuracy further, thereby helping text extraction from chart images.
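A skeleton of the CRNN-with-CTC recogniser described above, in Keras. The image size, vocabulary, layer widths, and label padding convention are assumptions, and the super-resolution stage is omitted:

```python
# Sketch: CRNN skeleton for small-text recognition with CTC loss. Image size,
# vocabulary and layer widths are assumptions; super-resolution is omitted.
import tensorflow as tf

n_chars = 80                                     # vocabulary size (assumed)
inp = tf.keras.Input(shape=(32, 128, 1))         # H x W x 1 text-line crop

x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)      # -> 16 x 64
x = tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.MaxPooling2D((2, 2))(x)      # -> 8 x 32

# Make width the time axis, then collapse height into the feature vector
x = tf.keras.layers.Permute((2, 1, 3))(x)        # (32, 8, 128)
x = tf.keras.layers.Reshape((32, 8 * 128))(x)
x = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(128, return_sequences=True))(x)
logits = tf.keras.layers.Dense(n_chars + 1)(x)   # +1 for the CTC blank

model = tf.keras.Model(inp, logits)

def ctc_loss(y_true, y_pred):
    # y_true: (batch, max_label_len) label indices, padded with -1 (assumed)
    batch = tf.shape(y_pred)[0]
    input_len = tf.fill([batch, 1], tf.shape(y_pred)[1])
    label_len = tf.math.count_nonzero(y_true + 1, axis=1, keepdims=True)
    return tf.keras.backend.ctc_batch_cost(y_true, tf.nn.softmax(y_pred),
                                           input_len, label_len)

model.compile(optimizer="adam", loss=ctc_loss)
```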

Keywords: small text extraction, OCR, scene text recognition, CRNN

Procedia PDF Downloads 101
8563 Effect of Reinforcement Density on the Behaviour of Reinforced Sand Under a Square Footing

Authors: Dhyaalddin Bahaalddin Noori Zangana

Abstract:

This study investigates the behavior of reinforced sand under a square footing. A series of bearing capacity tests were performed on a small-scale laboratory model filled with a poorly graded, homogeneous bed of sand placed in a medium-dense state using the sand-raining technique. The sand was reinforced with 40 mm wide household aluminum foil strips. The main parameters studied were the effects of reinforcing strip length, linear density of reinforcement, number of reinforcement layers, and depth of the top reinforcement layer below the footing on load-settlement behavior, bearing capacity ratio, and settlement reduction factor. The load-settlement relation showed a generally similar trend in all tests. Failure was defined as a settlement equal to 10% of the footing width. The recommended optimum reinforcing strip length, linear density of reinforcement, number of reinforcement layers, and depth of the top layer of reinforcing strips, giving the maximum bearing capacity improvement and minimum settlement reduction factor, are presented and discussed. At failure, the relations of bearing capacity ratio versus reinforcing strip length and of settlement reduction factor versus reinforcing strip length showed an improvement of the bearing capacity ratio by a factor of 3.82 and a reduction of the settlement reduction factor to 0.813. The optimum length of reinforcement was found to be 7.5 times the footing width.
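The two performance measures used above are simple ratios between reinforced and unreinforced tests; a quick worked check with toy values chosen to reproduce the reported factors:

```python
# Quick check: bearing capacity ratio (BCR) and settlement reduction factor
# (SRF) as ratios of reinforced to unreinforced test results (toy values).
q_unreinforced, q_reinforced = 120.0, 458.4   # bearing pressures, kPa (toy)
s_unreinforced, s_reinforced = 10.0, 8.13     # settlements, mm (toy)

BCR = q_reinforced / q_unreinforced           # 3.82, as reported
SRF = s_reinforced / s_unreinforced           # 0.813, as reported
print(f"BCR = {BCR:.2f}, SRF = {SRF:.3f}")
```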

Keywords: square footing, relative density, linear density of reinforcement, bearing capacity ratio, load-settlement behaviour

Procedia PDF Downloads 75
8562 Characterization of Some Bread Wheat Genotypes for Drought Tolerance Using Molecular Markers

Authors: Begüm Terzi, Özlem Ateş Sönmezoğlu, Ahmet Yildirim

Abstract:

Drought is the most important factor limiting the production and productivity of wheat worldwide. The yield of wheat, one of the world's most important crops, is reduced under drought. Breeding drought-resistant varieties is therefore among the most important efforts to minimize the effects of drought. In recent years, benefiting from drought-resistant wild species and rapid advances in molecular biology, research on drought has accelerated, and a number of studies in molecular plant breeding have addressed the molecular mechanisms of drought resistance. The aim of the present study was the characterization for drought tolerance of some bread wheat lines commonly cultivated in different locations of Turkey. Nine registered bread wheat varieties previously subjected to physiological tests of drought tolerance, together with 10 bread wheat lines developed by the Transitional Zone Agricultural Research Institute, were used. SSR, STS, RAPD, and SNP markers associated with drought tolerance were employed. The polymorphism of the markers was determined by screening two control varieties. For this purpose, 40 molecular markers were used, 12 of which were polymorphic between the drought-tolerant and drought-sensitive varieties. The control varieties were screened using the polymorphic markers, and all genotypes will be searched for the presence of QTLs mapped to different chromosomes. As a result of the research, the studied genotypes will be grouped according to drought tolerance, and drought-tolerant varieties will be detected by molecular markers. In addition, the results will be compared with the physiological tests. The drought-tolerant wheat genotypes may be used in breeding studies related to drought stress.

Keywords: bread wheat, drought, molecular marker, Triticum aestivum

Procedia PDF Downloads 358
8561 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index

Authors: A. Sathiya Susuman, Hamisi F. Hamisi

Abstract:

Background: Many socio-economic and demographic data are attributed by age and sex. However, a variety of irregularities and misstatements are noted with respect to age-related data, and fewer with respect to sex data because of the clear biological differences between the genders. Given this misstatement/misreporting of age data, despite its significant importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple's index, Myers' blended index, and the Age-Sex Accuracy Index. Results: The recorded Whipple's index for both sexes was 154.43; males had the lower index of about 152.65, while females had the higher index of about 156.07. For Myers' blended index, the preferences were for digits '0' and '5', while the avoidances were of digits '1' and '3' for both sexes. Finally, the Age-Sex Accuracy Index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques has shown the data to be inaccurate as a result of systematic heaping and digit preference/avoidance. Thus, innovative methods of data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
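Whipple's index follows a standard formula: the count of reported ages ending in 0 or 5 within ages 23-62, divided by one-fifth of the total count in that range, times 100 (100 means no heaping, 500 total heaping). A sketch on a simulated single-year age distribution:

```python
# Sketch: Whipple's index on a single-year age distribution (simulated counts).
# Standard definition: ages ending in 0 or 5 within 23-62, over 1/5 of the
# total in that range, times 100.
import numpy as np

rng = np.random.default_rng(12)
counts = rng.integers(800, 1200, size=100).astype(float)  # counts at ages 0..99
counts[25::5] *= 1.5                 # inject heaping on digits 0 and 5

ages = np.arange(100)
in_range = (ages >= 23) & (ages <= 62)
heaped = in_range & (ages % 5 == 0)

whipple = 100 * counts[heaped].sum() / (counts[in_range].sum() / 5)
print(f"Whipple's index = {whipple:.1f}")
```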

Keywords: age heaping, digit preference/avoidance, summary indices, Whipple’s index, Myer’s index, age-sex accuracy index

Procedia PDF Downloads 448