Search results for: Global Accuracy Indicator (GAI)
Analysis of Energy Efficiency Behavior with the Use of Train Dynamics Simulator and Statistical Tools: Case Study of Vitoria Minas Railway, Brazil
Authors: Eric Wilson Santos Cabral, Marta Monteiro Da Costa Cruz, Fabio Luis Maciel Machado, Henrique Andrade, Rodrigo Pirola Pestana, Vivian Andrea Parreira
Abstract:
The large variation in the price of diesel in Brazil directly affects the variable cost of companies operating in the transportation sector. In rail transport, the great challenge is to meet annual budget, cargo, and ore transport targets while reducing costs relative to previous years, becoming more efficient every year. Effective measures are necessary to reduce the ratio of liters consumed per KTKB (gross ton-kilometers multiplied by one thousand). This acronym represents the energy efficiency indicator used by several railroads around the world. This study is divided into two parts: the first uses statistical tools to identify which of the variables controlled on the railway correlate with the energy efficiency indicator, seeking to aid decision-making. The second uses a train dynamics simulator, within scenarios defined by the operational reality of a railroad, to optimize train formations and the train stop model for the change of train drivers. With the completion of the study, companies in the rail sector are expected to be able to reduce some of their transportation costs. Keywords: railway transport, railway simulation, energy efficiency, fuel consumption
Evaluation of Spatial Distribution Prediction for Site-Scale Soil Contaminants Based on Partition Interpolation
Authors: Pengwei Qiao, Sucai Yang, Wenxia Wei
Abstract:
Soil pollution has become an important issue in China. Accurate spatial distribution prediction of pollutants with interpolation methods is the basis for soil remediation at a site. However, the relatively strong variability of pollutants decreases prediction accuracy. Theoretically, partition interpolation can produce accurate prediction results. In order to verify the applicability of partition interpolation for a site, benzo(b)fluoranthene (BbF) in four soil layers was adopted as the research object in this paper. IDW (inverse distance weighting)-, RBF (radial basis function)- and OK (ordinary kriging)-based partition interpolation accuracies were evaluated, and their influential factors were analyzed; then, the uncertainty and applicability of partition interpolation were determined. Three conclusions were drawn. (1) The prediction error of partitioned interpolation decreased by 70% compared to unpartitioned interpolation. (2) Partition interpolation reduced the impact of a high CV (coefficient of variation) and high concentration values on prediction accuracy. (3) The prediction accuracy of IDW-based partition interpolation was higher than that of RBF- and OK-based partition interpolation, and it was suitable for the identification of highly polluted areas at a contaminated site. These results provide a useful method to obtain relatively accurate spatial distribution information of pollutants and to identify highly polluted areas, which is important for soil pollution remediation at a site. Keywords: accuracy, applicability, partition interpolation, site, soil pollution, uncertainty
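As an illustration of the interpolation at the core of this comparison, below is a minimal inverse distance weighting (IDW) sketch in Python. The sample coordinates, concentrations, and power parameter are hypothetical, and the paper's partition step (interpolating each homogeneous zone separately) is only suggested here by restricting the known points to a single partition before calling the function.

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting: estimate values at query points
    from known sample points. `power` controls how fast influence decays."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # weights fall off with distance
    w /= w.sum(axis=1, keepdims=True)     # normalize weights per query point
    return w @ values

# Hypothetical BbF concentrations (mg/kg) at sampled locations in one partition
xy_known = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
conc = np.array([0.8, 2.4, 1.1, 5.6])

xy_query = np.array([[5.0, 5.0], [2.0, 8.0]])
print(idw_interpolate(xy_known, conc, xy_query))
```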
Going beyond Stakeholder Participation
Authors: Florian Engel
Abstract:
Only with a radical change toward an intrinsically motivated project team, achieved by giving employees the freedom for autonomy, mastery, and purpose, is it possible to develop excellent products. With these changes, combined with a rapid application development approach, the group of users serves as an important indicator for testing market needs, rather than only as stakeholders for requirements. Keywords: intrinsic motivation, requirements elicitation, self-directed work, stakeholder participation
Oil Producing Wells Using a Technique of Gas Lift on Prosper Software
Authors: Nikhil Yadav, Shubham Verma
Abstract:
Gas lift is a common technique used to optimize oil production in wells. Prosper software is a powerful tool for modeling and optimizing gas lift systems in oil wells. This review paper examines the effectiveness of Prosper software in optimizing gas lift systems in oil-producing wells. The literature review identified several studies that demonstrated the use of Prosper software to adjust injection rate, depth, and valve characteristics to optimize gas lift system performance. The results showed that Prosper software can significantly improve production rates and reduce operating costs in oil-producing wells. However, the accuracy of the model depends on the accuracy of the input data, and the cost of Prosper software can be high. Therefore, further research is needed to improve the accuracy of the model and evaluate the cost-effectiveness of using Prosper software in gas lift system optimization. Keywords: gas lift, prosper software, injection rate, operating costs, oil-producing wells
Unlocking Green Hydrogen Potential: A Machine Learning-Based Assessment
Authors: Said Alshukri, Mazhar Hussain Malik
Abstract:
Green hydrogen is hydrogen produced using renewable energy sources. In the last few years, Oman has aimed to reduce its dependency on fossil fuels. Recently, the hydrogen economy has become a global trend, and many countries have started to investigate the feasibility of implementing this sector. Oman created an alliance to establish the policy and rules for this sector. With motivation coming from both global and local interest in green hydrogen, this paper investigates the potential of producing hydrogen from wind and solar energies in three different locations in Oman, namely Duqm, Salalah, and Sohar. Using the machine learning-based software “WEKA” and local meteorological data, the project was designed to determine which location has the highest wind and solar energy potential. First, various supervised models were tested to obtain their prediction accuracy, and it was found that the Random Forest (RF) model has the best prediction performance. The RF model was applied to 2021 meteorological data for each location, and the results indicated that Duqm has the highest wind and solar energy potential. A system of one wind turbine in Duqm can produce 8335 MWh/year, which could be utilized in the water electrolysis process to produce 88847 kg of hydrogen, while a solar system consisting of 2820 solar cells is estimated to produce 1666.223 MWh/year, which is capable of producing 177591 kg of hydrogen. Keywords: green hydrogen, machine learning, wind and solar energies, WEKA, supervised models, random forest
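The abstract names WEKA's Random Forest as the best-performing supervised model; a comparable workflow can be sketched in Python with scikit-learn. The feature names and data file below are hypothetical stand-ins for the local meteorological records, not the study's actual inputs.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical meteorological records for one site (e.g., Duqm)
df = pd.read_csv("duqm_2021_weather.csv")  # placeholder file name
X = df[["wind_speed", "temperature", "humidity", "solar_irradiance"]]
y = df["energy_output_mwh"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```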
Conflict and Hunger Revisit: Evidences from Global Surveys, 1989-2020
Authors: Manasse Elusma, Thung-Hong Lin, Chun-yin Lee
Abstract:
The relationship between hunger and war or conflict remains open to discussion. Do wars or conflicts cause hunger and food scarcity, or is the reverse true? As the world becomes more peaceful and wealthier, some countries still suffer from hunger and food shortage. So, eradicating hunger calls for a more comprehensive understanding of the relationship between conflict and hunger. Several studies have been carried out to detect the importance of conflict or war for food security. Most of these studies, however, perform only descriptive analysis and largely use food security indicators instead of the global hunger index. Few studies have employed cross-country panel data to explicitly analyze the association between conflict and chronic hunger, including hidden hunger. Herein, this study addresses this issue and the knowledge gap. We combine global datasets to build a new panel dataset including 143 countries from 1989 to 2020. This study examines the effect of conflict on hunger with fixed effect models, and the results show that an increase in conflict frequency deteriorates hunger. Peacebuilding efforts and war prevention initiatives are required to eradicate global hunger. Keywords: armed conflict, food scarcity, hidden hunger, hunger, malnutrition
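A fixed-effects specification of the kind described can be sketched with the statsmodels formula API; the variable names and panel file are hypothetical, with country and year dummies absorbing the fixed effects.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-year panel: global hunger index and conflict frequency
panel = pd.read_csv("hunger_conflict_panel.csv")  # columns: country, year, ghi, conflicts

# Two-way fixed effects: country and year dummies absorb unobserved heterogeneity;
# standard errors clustered by country
model = smf.ols("ghi ~ conflicts + C(country) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]}
)
print(model.params["conflicts"], model.pvalues["conflicts"])
```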
A Study of Permission-Based Malware Detection Using Machine Learning
Authors: Ratun Rahman, Rafid Islam, Akin Ahmed, Kamrul Hasan, Hasan Mahmud
Abstract:
Malware is becoming more prevalent, and several threat categories have risen dramatically in recent years. This paper provides a bird's-eye view of the world of malware analysis. The efficiency of five different machine learning methods (Naive Bayes, K-Nearest Neighbor, Decision Tree, Random Forest, and TensorFlow Decision Forest) combined with features picked from the retrieval of Android permissions to categorize applications as harmful or benign is investigated in this study. The dataset consists of 1,168 samples (among these Android applications, 602 are malware and 566 are benign), each described by 948 features (permissions). Using the permission-based dataset, the machine learning algorithms produce accuracy rates above 80%, except for the Naive Bayes algorithm with 65% accuracy. Of the considered algorithms, TensorFlow Decision Forest performed the best with an accuracy of 90%. Keywords: android malware detection, machine learning, malware, malware analysis
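On a binary permission matrix like the one described (948 permission features, harmful/benign labels), a Random Forest baseline is straightforward. The array shapes below follow the abstract, but the data itself is randomly generated for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1168, 948))   # 0/1 permission flags (synthetic)
y = rng.integers(0, 2, size=1168)          # 1 = malware, 0 = benign (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```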
Shark Detection and Classification with Deep Learning
Authors: Jeremy Jenrette, Z. Y. C. Liu, Pranav Chimote, Edward Fox, Trevor Hastie, Francesco Ferretti
Abstract:
Suitable shark conservation depends on well-informed population assessments. Direct methods such as scientific surveys and fisheries monitoring are adequate for defining population statuses, but species-specific indices of abundance and distribution coming from these sources are rare for most shark species. We can rapidly fill these information gaps by boosting media-based remote monitoring efforts with machine learning and automation. We created a database of shark images by sourcing 24,546 images covering 219 species of sharks from the web application spark pulse and the social network Instagram. We used object detection to extract shark features and inflate this database to 53,345 images. We packaged object-detection and image classification models into a Shark Detector bundle. We developed the Shark Detector to recognize and classify sharks from videos and images using transfer learning and convolutional neural networks (CNNs). We applied these models to common data-generation approaches of sharks: boosting training datasets, processing baited remote camera footage and online videos, and data-mining Instagram. We examined the accuracy of each model and tested genus and species prediction correctness as a result of training data quantity. The Shark Detector located sharks in baited remote footage and YouTube videos with an average accuracy of 89%, and classified located subjects to the species level with 69% accuracy (n = 8 species). The Shark Detector sorted heterogeneous datasets of images sourced from Instagram with 91% accuracy and classified species with 70% accuracy (n = 17 species). Data-mining Instagram can inflate training datasets and increase the Shark Detector’s accuracy as well as facilitate archiving of historical and novel shark observations. Base accuracy of genus prediction was 68% across 25 genera. The average base accuracy of species prediction within each genus class was 85%. The Shark Detector can classify 45 species. All data-generation methods were processed without manual interaction. As media-based remote monitoring strives to dominate methods for observing sharks in nature, we developed an open-source Shark Detector to facilitate common identification applications. Prediction accuracy of the software pipeline increases as more images are added to the training dataset. We provide public access to the software on our GitHub page. Keywords: classification, data mining, Instagram, remote monitoring, sharks
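The transfer-learning setup described (a pre-trained CNN re-headed for species classification) can be sketched in Keras. The backbone choice, image size, and training details here are assumptions for illustration, not the Shark Detector's actual configuration; only the 45-species class count comes from the abstract.

```python
import tensorflow as tf

NUM_SPECIES = 45  # the abstract states the Shark Detector classifies 45 species

# Frozen ImageNet backbone plus a trainable classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_SPECIES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # image datasets go here
```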
Behavior of Clay Effect on Electrical Parameter of Reservoir Rock Using Global Hydraulic Elements (GHEs) Approach
Authors: Noreddin Mousa
Abstract:
The main objective of this study is to determine which type of clay mineral has the greater effect on the saturation exponent, using the Global Hydraulic Elements (GHEs) approach to estimate the distribution of the saturation exponent factor. Two wells and seven core samples have been selected from various GHEs for detailed study. There are many factors affecting the saturation exponent, such as wettability, grain pattern, and the presence of certain authigenic clays, which may promote oil-wet characteristics depending on the history of fluid displacement. The saturation exponent is related to the texture and is affected by wettability and clay minerals. Capillary pressure (mercury injection) has been used to confirm the GHEs selected to define rock types; the porous plate method is used to derive the saturation exponent in the laboratory. Petrography is very important in order to study the mineralogy and texture. In this study, the results show an excellent relation between the saturation exponent and the type of clay minerals: it was observed that the Global Hydraulic Elements GHE-2 and GHE-5, which contain chlorite, affect the saturation exponent more than the other GHEs. Keywords: GHEs, wettability, global hydraulic elements, petrography
Random Forest Classification for Population Segmentation
Authors: Regina Chua
Abstract:
To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, one needed to determine the most predictive ten or fewer questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to be quick in its classification and usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consisted of an iterative collection of individual decision trees that result in a predicted segment with robust precision and recall scores compared to a single tree. A random 70-30 stratified sampling for training the algorithm was used, and accuracy trade-offs at different depths for each segment were identified. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10 with 20 instead of 254 features and 10 instead of 60 questions. With an acceptable accuracy in prioritizing feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real-time. Random Forest was determined to be an optimal classification model by its feature selection, performance, processing speed, and flexible application in other environments. Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling
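The pruning step described (keeping the 20 most informative of 254 features and capping tree depth at 10) can be sketched as follows; the synthetic data is a stand-in for the survey responses, and the class count is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 7,000 respondents, 254 features, several segments
X, y = make_classification(n_samples=7000, n_features=254, n_informative=30,
                           n_classes=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=1)

full = RandomForestClassifier(random_state=1).fit(X_tr, y_tr)
top20 = np.argsort(full.feature_importances_)[-20:]   # keep the 20 best features

pruned = RandomForestClassifier(max_depth=10, random_state=1)
pruned.fit(X_tr[:, top20], y_tr)
print("pruned-model accuracy:", pruned.score(X_te[:, top20], y_te))
```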
Experiments on Weakly-Supervised Learning on Imperfect Data
Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler
Abstract:
Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is if the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data, e.g., the area under the curve for some models is higher than 80% when trained on the data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning. Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation
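The simulation experiments can be reproduced in miniature: train a linear SVM on labels corrupted at a known rate and compare its test accuracy against that rate. Everything below is synthetic and illustrative, not the study's clinical data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

error_rate = 0.40                      # corrupt 40% of training labels
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < error_rate
y_noisy = np.where(flip, 1 - y_tr, y_tr)

clf = LinearSVC(dual=False).fit(X_tr, y_noisy)
print("training-label accuracy:", 1 - error_rate)
print("model test accuracy:   ", clf.score(X_te, y_te))  # can exceed 60%
```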
Methodology and Credibility of Unmanned Aerial Vehicle-Based Cadastral Mapping
Authors: Ajibola Isola, Shattri Mansor, Ojogbane Sani, Olugbemi Tope
Abstract:
The cadastral map is the rationale behind city management planning and development. For years, cadastral maps have been produced by ground and photogrammetry platforms. Recent evolution in photogrammetry and remote sensing sensors has ignited the use of Unmanned Aerial Vehicle systems (UAVs) for cadastral mapping. Despite the time savings and multi-dimensional cost-effectiveness of the UAV platform, issues related to cadastral map accuracy hinder the wide applicability of UAV cadastral mapping. This study aims to present an approach for generating UAV cadastral maps and assessing their credibility. Different sets of Red, Green, and Blue (RGB) photos were obtained from the Tarot 680-hexacopter UAV platform flown over the Universiti Putra Malaysia campus sports complex at altitudes of 70 m, 100 m, and 250 m. Before flying the UAV, twenty-eight ground control points were evenly established in the study area with a real-time kinematic differential global positioning system. The second phase of the study utilizes an image-matching algorithm for photo alignment, wherein camera calibration parameters and ten of the established ground control points were used for estimating the inner, relative, and absolute orientations of the photos. The resulting orthoimages were exported to ArcGIS software for digitization. Visual, tabular, and graphical assessments of the resulting cadastral maps showed differing levels of accuracy. The results present a step-by-step approach for generating UAV cadastral maps and indicate that the cadastral map acquired at 70 m altitude produced the best results. Keywords: aerial mapping, orthomosaic, cadastral map, flying altitude, image processing
Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block Nv32, Shenvsi Oilfield, China
Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan
Abstract:
The significant effect of CO₂ on global climate and the environment has gained growing concern worldwide. Enhanced oil recovery (EOR) associated with sequestration of CO₂, particularly into a depleted oil reservoir, is considered a viable approach under financial limitations, since it improves recovery from the existing oil reservoir and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measurements are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model to estimate the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in the block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs of twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct a 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in the block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity and permeability, as well as lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone with proportions of 38.09%, 32.42%, and 29.49%, respectively. Well log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios. These estimated values of porosity, permeability, lithofacies, and net-to-gross were up-scaled and distributed laterally using Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate 3D reservoir geological models. The reservoir geological models show lateral heterogeneities of the reservoir properties and lithofacies, and the best reservoir rocks exist in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368 Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32
Vocational Education: A Synergy for Skills Acquisition and Global Learning in Colleges of Education in Ogun State, Nigeria
Authors: Raimi, Kehinde Olawuyi, Omoare Ayodeji Motunrayo
Abstract:
In the last two decades, there has been rising youth unemployment, restiveness, and social vices in Nigeria. The relevance of Vocational Education for skills acquisition, global learning, and national development in addressing these problems cannot be underestimated. Thus, the need to economically empower Nigerian youths to be able to develop the nation and keep up with the ever-changing global learning and economy led to this assessment of Vocational Education as a Synergy for Skills Acquisition and Global Learning in Ogun State, Nigeria. One hundred and twenty out of 1,500 students were randomly selected for this study. Data were obtained through a questionnaire and were analyzed with descriptive statistics and Chi-square. The results of the study showed that 59.2% of the respondents were between 20 – 24 years of age, 60.8% were male, and 65.8% had a keen interest in Vocational Education. Also, 90% of the respondents acquired skills in extension/advisory, 78.3% acquired skills in poultry production, and 69.1% acquired skills in fisheries/aquaculture. The major constraints to Vocational Education are inadequate resource personnel (χ² = 10.25, p = 0.02), inadequate training facilities (x̅ = 2.46), and unstable power supply (x̅ = 2.38). Results of the Chi-square test showed a significant association between constraints and Skills Acquisition (χ² = 12.54, p = 0.00) at the p < 0.05 level of significance. It was established that Vocational Education significantly contributed to students’ skills acquisition and global learning. This study, therefore, recommends that inadequate personnel be looked into by the school authority in order not to over-stretch the available staff of the institution, while the provision of an alternative stable power supply (solar power) is also essential for an effective teaching and learning process. Keywords: vocational education, skills acquisition, national development, global learning
Compact Optical Sensors for Harsh Environments
Authors: Branislav Timotijevic, Yves Petremand, Markus Luetzelschwab, Dara Bayat, Laurent Aebi
Abstract:
Optical miniaturized sensors with remote readout are required for monitoring in harsh electromagnetic environments. As an example, in turbo and hydro generators, excessively high vibrations of the end-windings can lead to dramatic damage, imposing very high additional service costs. A significant change in the generator temperature can also be an indicator of system failure. Continuous monitoring of vibrations, temperature, humidity, and gases is therefore mandatory. The high electromagnetic fields in the generators impose the use of non-conductive devices in order to prevent electromagnetic interference and to electrically isolate the sensing element from the electronic readout. Metal-free sensors are good candidates for such systems since they are immune to very strong electromagnetic fields and are non-conductive. We have realized miniature optical accelerometer and temperature sensors for remote sensing of harsh environments using the common, inexpensive silicon Micro Electro-Mechanical System (MEMS) platform. Both devices show a highly linear response. The accelerometer has a deviation within 1% from the linear fit when tested in a range of 0 – 40 g. The temperature sensor can provide a measurement accuracy better than 1 °C in a range of 20 – 150 °C. The design of other types of sensors for environments with high electromagnetic interference is also discussed. Keywords: optical MEMS, temperature sensor, accelerometer, remote sensing, harsh environment
The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners
Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani
Abstract:
Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that messages are conveyed correctly through conversation, because an erroneous change may adversely alter the content and purpose of the talk. Different types of tasks have served teachers in meeting numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that have a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication that leads to the interlocutor attempting to remedy the gap through talk. Therefore, this study was an attempt to investigate whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study, the Preliminary English Test was administered to ensure the homogeneity of 87 out of 107 participants who attended the intact classes of a 15-session term in one control and two experimental groups. Also, speaking sections of the PET were used as pretest and posttest to examine their speaking accuracy. The tests were recorded and transcribed, and speaking accuracy was measured as the percentage of clauses with no grammatical errors out of the total produced clauses. In all groups, the grammatical points of accuracy were instructed, and the use of conversational strategies was practiced. Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during treatment sessions, and the students were required to practice conversational strategies when doing speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Of the two, the reasoning gap tasks had a more significant impact on encouraging negotiation of meaning and increasing the frequency of conversational strategies from session to session. The findings also indicated that both task types could help learners significantly improve their speaking accuracy. Here, applying the reasoning gap tasks was more effective than the information gap tasks in improving the level of learners’ speaking accuracy. Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks
SNR Classification Using Multiple CNNs
Authors: Thinh Ngo, Paul Rad, Brian Kelley
Abstract:
Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression, and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification to SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification of a single SNR with two labels: less than, or greater than or equal. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation. Keywords: classification, CNN, deep learning, prediction, SNR
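Once each binary CNN has voted, the divide-and-conquer fusion reduces to a simple counting rule: tally how many threshold classifiers say "greater than or equal". A numpy sketch of that combination step, with hypothetical per-threshold outputs and an assumed 4 dB grid:

```python
import numpy as np

# One binary CNN per SNR threshold (assumed grid: -20 ... 32 dB in 4 dB steps)
thresholds = np.arange(-20, 33, 4)

# Hypothetical outputs: votes[i, j] = 1 if CNN j says "SNR >= thresholds[j]"
votes = np.array([
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],   # sample A
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],   # sample B
])

# Each "greater than or equal" vote pushes the estimate up one bin
counts = votes.sum(axis=1)                 # number of thresholds cleared
snr_est = thresholds[np.clip(counts - 1, 0, len(thresholds) - 1)]
print(snr_est)   # [-4, 20] dB for the two samples above
```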
Machine Learning for Disease Prediction Using Symptoms and X-Ray Images
Authors: Ravija Gunawardana, Banuka Athuraliya
Abstract:
Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor, and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases. Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine
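For the symptom branch, the three named classifiers can be combined with a simple voting ensemble in scikit-learn; the symptom data file and column layout below are hypothetical, and the abstract does not specify which ensemble technique the authors used.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

df = pd.read_csv("symptoms.csv")          # placeholder: one-hot symptom columns
X, y = df.drop(columns="disease"), df["disease"]

ensemble = VotingClassifier(estimators=[
    ("rf", RandomForestClassifier(random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", SVC(probability=True, random_state=0)),
], voting="soft")                          # average predicted probabilities

print(cross_val_score(ensemble, X, y, cv=5).mean())
```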
Impact of a Virtual Reality Training on Real-World Hockey Skill: An Intervention Trial
Authors: Matthew Buns
Abstract:
Training specificity is imperative for the successful performance of the elite athlete. Virtual reality (VR) has been successfully applied to a broad range of training domains. However, to date there is little research investigating the use of VR for sport training. The purpose of this study was to address the question of whether virtual reality (VR) training can improve real-world hockey shooting performance. Twenty-four volunteers were recruited and randomly selected to complete the virtual training intervention or enter a control group with no training. Four primary types of data were collected: 1) participants’ experience with video games and hockey, 2) participants’ motivation toward video game use, 3) participants’ technical performance in real-world hockey, and 4) participants’ technical performance in virtual hockey. A one-way analysis of variance (ANOVA) indicated that the intervention group demonstrated significantly more real-world hockey accuracy [F(1,24) = 15.43, p < .01, E.S. = 0.56] while shooting on goal than their control group counterparts [intervention M accuracy = 54.17%, SD = 12.38; control M accuracy = 46.76%, SD = 13.45]. A repeated-measures analysis of variance indicated significantly higher outcome scores on real-world accuracy (35.42% versus 54.17%; ES = 1.52) and velocity (51.10 mph versus 65.50 mph; ES = 0.86) of hockey shooting on goal. This research supports the idea that virtual training is an effective tool for increasing real-world hockey skill. Keywords: virtual training, hockey skills, video game, esports
Evaluation of Green Logistics Performance: An Application of Analytic Hierarchy Process Method for Ranking Environmental Indicators
Authors: Eduarda Dutra De Souza, Gabriela Hammes, Marina Bouzon, Carlos M. Taboada Rodriguez
Abstract:
The search to minimize harmful impacts on the environment has become the focus of global society, mainly affecting how organizations are managed. Thus, companies have sought to transform their activities into environmentally friendly initiatives by applying green practices throughout their supply chains. In the logistics domain, the implementation of environmentally sound practices is still in its infancy in emerging countries such as Brazil. Given the need to reduce this environmental damage, this study aims to evaluate the performance of green logistics (GL) in the plastics industry sector in order to help improve environmental performance within organizations and reduce the impact caused by their activities. The performance tool was based on theoretical research and the input of experts in the field. The Analytic Hierarchy Process (AHP) was used to prioritize green practices and assign weights to the indicators contained in the proposed tool. The tool also allows the results to be combined into a single indicator. The developed tool was applied in a company in the plastic packaging sector; however, it may be applied in different industry sectors and is adaptable to companies of different sizes. Besides the contributions to the literature, this work also presents future paths of research in the field of green logistics. Keywords: AHP, green logistics, green supply chain, performance evaluation
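The AHP weighting step works by taking the principal eigenvector of a pairwise comparison matrix and checking its consistency. A minimal sketch with a hypothetical 3x3 comparison of environmental indicators (the Saaty-scale judgments below are invented for illustration):

```python
import numpy as np

# Hypothetical pairwise comparisons (Saaty scale) for three indicators
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                   # priority weights, sum to 1

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
cr = ci / 0.58                                 # random index RI = 0.58 for n = 3
print("weights:", w.round(3), "CR:", round(cr, 3))   # CR < 0.1 is acceptable
```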
From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis
Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen
Abstract:
Hepatitis is one of the most common and dangerous diseases that affect humankind, exposing millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling by implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first proposed a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In the proposed system, the process of diagnosis faces vagueness and uncertainty in the final decision. Thus, the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy has the ability to diagnose hepatitis with an average accuracy of 93.94%. The classification accuracy obtained is the highest one reached thus far. The aforementioned rate of accuracy demonstrates that the Type-II fuzzy system has better performance in comparison to Type-I and indicates a higher capability of the Type-II fuzzy system for modeling uncertainty. Keywords: hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection
Use of Particle Swarm Optimization for Loss Minimization of Vector-Controlled Induction Motors
Authors: V. Rashtchi, H. Bizhani, F. R. Tatari
Abstract:
This paper presents a new online loss minimization method for an induction motor drive. Among the many loss minimization algorithms (LMAs) for an induction motor, particle swarm optimization (PSO) has the advantages of fast response and high accuracy. However, the performance of PSO and other optimization algorithms depends on the accuracy of the modeling of the motor drive and losses. In the development of the loss model, there is always a trade-off between accuracy and complexity. This paper presents a new online optimization to determine the optimum flux level for efficiency optimization of the vector-controlled induction motor drive. An induction motor (IM) model in d-q coordinates is referenced to the rotor magnetizing current. This transformation results in no leakage inductance on the rotor side, thus the decomposition into d-q components in the steady-state motor model can be utilized in deriving the motor loss model. The suggested algorithm is simple to implement. Keywords: induction machine, loss minimization, magnetizing current, particle swarm optimization
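A bare-bones PSO of the kind used for flux optimization, minimizing a stand-in loss function over the flux level; the loss model here is a hypothetical convex placeholder (copper-like plus iron-like terms), not the paper's d-q motor loss model, and the swarm parameters are conventional defaults.

```python
import numpy as np

def loss(flux):
    """Placeholder loss: copper-like + iron-like terms with an interior optimum."""
    return 4.0 / flux + 2.5 * flux**2

rng = np.random.default_rng(0)
n, iters = 20, 100
x = rng.uniform(0.2, 1.5, n)          # particle positions (flux levels, p.u.)
v = np.zeros(n)
pbest, pbest_f = x.copy(), loss(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    # inertia 0.7, cognitive and social coefficients 1.5
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.2, 1.5)      # keep flux within physical bounds
    f = loss(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print("optimum flux:", round(gbest, 4), "loss:", round(loss(gbest), 4))
```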
MRI Quality Control Using Texture Analysis and Spatial Metrics
Authors: Kumar Kanudkuri, A. Sandhya
Abstract:
Typically, in an MRI clinical setting, several protocols are run, each indicated for a specific anatomy and disease condition. However, these protocols or the parameters within them can change over time due to changes in the recommendations of physician groups, updates in the software, or the availability of new technologies. Most of the time, the changes are performed by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. However, giving proper guidelines to the MRI technologist is important so that they do not change parameters that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and might vary among operators, as well as for the same operator at different times. Therefore, overcoming these constraints is essential for a more impartial evaluation of quality. This makes quantitative estimation of image quality (IQ) metrics very important for MRI quality control. To solve this problem, we propose a robust, open-source, and automated MRI image quality control tool. We designed and developed an automatic analysis tool for measuring MRI IQ metrics (Signal to Noise Ratio (SNR), Signal to Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray Level Co-occurrence Matrix (GLCM), slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution), which provided a good accuracy assessment. A standardized quality report was generated that incorporates the metrics that impact diagnostic quality. Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy
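Two of the listed metrics are easy to sketch: SNR from signal and background regions of interest, and GLCM texture properties via scikit-image. The image is random noise and the ROI coordinates are hypothetical; a real QC tool would place the ROIs on the ACR phantom.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.default_rng(0).integers(0, 256, (256, 256)).astype(np.uint8)

# SNR: mean of a signal ROI over standard deviation of a background ROI
signal = img[100:140, 100:140]        # hypothetical phantom-centre ROI
background = img[5:25, 5:25]          # hypothetical air/background ROI
snr = signal.mean() / background.std()

# GLCM texture features at a 1-pixel offset, horizontal direction
glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")[0, 0]
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
print(f"SNR={snr:.2f} contrast={contrast:.1f} homogeneity={homogeneity:.3f}")
```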
Neural Network-Based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making broad screening difficult and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia. Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
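The Grid Search CV tuning of the MLP stage can be sketched with scikit-learn. The feature matrix would be the concatenated CNN outputs plus the other inputs, represented below by synthetic data; the parameter grid itself is an assumption, since the abstract does not list the search space.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))   # synthetic stand-in for fused model outputs
y = rng.integers(0, 2, 500)      # 1 = at risk, 0 = not at risk

grid = GridSearchCV(
    MLPClassifier(max_iter=2000, random_state=0),
    param_grid={
        "hidden_layer_sizes": [(32,), (64,), (64, 32)],
        "alpha": [1e-4, 1e-3, 1e-2],           # L2 regularization strength
    },
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 4))
```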
A Composite Beam Element Based on Global-Local Superposition Theory for Prediction of Delamination in Composite Laminates
Authors: Charles Mota Possatti Júnior, André Schwanz de Lima, Maurício Vicente Donadon, Alfredo Rocha de Faria
Abstract:
An interlaminar damage model is combined with a beam element formulation based on global-local superposition to assess delamination in composite laminates. The variations in the mechanical properties in the laminate, generated by the presence of delamination, are calculated as a function of the displacements in the interface layers. The global-local superposition of displacement fields ensures the zig-zag behaviour of stresses and displacement, and the number of degrees of freedom (DOFs) is independent of the number of layers. The displacements and stresses are calculated as a function of DOFs commonly used in traditional beam elements. Finally, the finite element (FE) formulation is extended to handle cases of different thicknesses, and then the FE model predictions are compared with results obtained from analytical solutions and commercial finite element codes. Keywords: delamination, global-local superposition theory, single beam element, zig-zag, interlaminar damage model
Face Recognition Using Discrete Orthogonal Hahn Moments
Authors: Fatima Akhmedova, Simon Liao
Abstract:
One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant, and non-redundant facial information. In this work, we propose a set of Hahn moments as a new approach for feature description. Hahn moments have been widely used in image analysis due to their invariance, non-redundancy, and ability to extract features both globally and locally. To assess the applicability of Hahn moments to face recognition, we conduct two experiments on the Olivetti Research Laboratory (ORL) database and the University of Notre-Dame (UND) X1 biometric collection. Fusion of the global features with the features from local facial regions is used as input for a conventional k-NN classifier. The method reaches an accuracy of 93% of correctly recognized subjects for the ORL database and 94% for the UND database. Keywords: face recognition, Hahn moments, recognition-by-parts, time-lapse
Similar Script Character Recognition on Kannada and Telugu
Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy
Abstract:
This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities in their characters. Recognizing the characters requires exhaustive datasets, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model with it, and then test it with the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on pictures with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations. Manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a Convolutional Neural Network (CNN) and a Visual Attention neural network (VAN), were used to experiment with the dataset. A Visual Attention neural network (VAN) architecture was adopted, incorporating additional channels for Canny edge features, as the results obtained with this approach were good. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with Canny edge characteristics applied than with a model that solely used the original grayscale images. The accuracy of the model was found to be 80.11% for Telugu characters and 98.01% for Kannada words when tested on these languages. This model, which makes use of cutting-edge machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts. Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN
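The Canny-edge augmentation described (adding an edge map as an extra input channel alongside the grayscale image) can be sketched with OpenCV; the file path and thresholds below are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("kannada_char.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

edges = cv2.Canny(img, threshold1=50, threshold2=150)  # hypothetical thresholds

# Stack grayscale + edge map into a 2-channel input for the CNN/VAN
x = np.stack([img, edges], axis=-1).astype(np.float32) / 255.0
print(x.shape)   # (H, W, 2)
```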
Role of Pulp Volume Method in Assessment of Age and Gender in Lucknow, India: An Observational Study
Authors: Anurag Tripathi, Sanad Khandelwal
Abstract:
Age and gender determination are required in forensics for victim identification. There is secondary dentine deposition throughout life, resulting in decreased pulp volume and size. Evaluation of pulp volume using Cone Beam Computed Tomography (CBCT) is a noninvasive method to evaluate the age and gender of an individual. The study was done to evaluate the efficacy of the pulp volume method in the determination of age and gender. Aims/Objectives: The study was conducted to estimate age and determine sex by measuring tooth pulp volume with the help of CBCT. An observational study of one year's duration on CBCT data of individuals was conducted in Lucknow. Maxillary central incisors (CI) and maxillary canines (C) of the randomly selected samples were assessed for measurement of pulp volume using software. Statistical analysis: Chi-square test, arithmetic mean, standard deviation, Pearson’s correlation, linear and logistic regression analysis. Results: The CBCT data of ninety individuals with an age range of 18-70 years were evaluated for pulp volume of the central incisor and canine (CI & C). The Pearson correlation coefficient between tooth pulp volume (CI & C) and chronological age suggested that pulp volume decreased with age. The validation of the equations for sex determination showed higher prediction accuracy for CI (56.70%) and lower for C (53.30%). Conclusion: Pulp volume obtained from CBCT is a reliable indicator for age estimation and gender prediction. Keywords: forensic, dental age, pulp volume, cone beam computed tomography
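The two statistical models referenced (linear regression of age on pulp volume, logistic regression for sex) can be sketched with scikit-learn; the data file and column names are hypothetical stand-ins for the CBCT measurements.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

df = pd.read_csv("pulp_volumes.csv")   # placeholder: ci_vol, c_vol, age, sex

X = df[["ci_vol", "c_vol"]]            # pulp volumes of CI and C (mm^3)

age_model = LinearRegression().fit(X, df["age"])
sex_model = LogisticRegression().fit(X, df["sex"])   # 0 = female, 1 = male

print("age coefficients:", age_model.coef_)   # expected negative: volume falls with age
print("sex prediction accuracy:", sex_model.score(X, df["sex"]))
```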
The Outcome of Using Machine Learning in Medical Imaging
Authors: Adel Edwar Waheeb Louka
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has suggested that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. The model is trained on 8577 images and validated on a validation split of 20%. These models are evaluated using the external dataset for validation. The models’ accuracy, precision, recall, F1-score, IOU, and loss are calculated. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used. Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
Impact of International Student Mobility on European and Global Identity: A Case Study of Switzerland
Authors: Karina Oborune
Abstract:
International student mobility involves a unique spatio-temporal context, and exploring the various aspects of mobile students’ experience can lead to new findings within identity studies. Previous studies have mainly focused on student mobility within Europe and its impact on European identity, arguing that students who participate in intra-European mobility already feel European before the exchange. Contrary to previous studies, in this paper student mobility is analyzed from a different point of view. In order to see whether a true Europeanization of identities is taking place, it is necessary to contrast European identity with an alternative supranational identity that could similarly result from student mobility, in particular a global identity. The paper also explores whether the geographical constellation (the continental location of the host country during mobility: Europe vs. outside of Europe) plays a role. Based on a newly developed model of multicultural, social, and socio-demographic variables, it is argued that after intra-European mobility only the global identity of students could be increased (H1), whereas mobility to countries outside of Europe causes changes in European identity (H2). The quantitative study (survey, n = 1440, 22 higher education institutions, an experimental group of former and future/potential mobile students and a control group of non-mobile students) was held in Switzerland, where equally high numbers of students participate in intra-European mobility and in mobility outside of Europe. The results of multivariate linear regression showed that students who participate in exchange in Europe increase their European identity due to having close friends from Europe, with the length of the mobility experience also having an impact, while students who participate in exchange outside of Europe increase their global identity due to having close friends from outside of Europe and proficiency in foreign languages. Keywords: student mobility, European identity, global identity