Search results for: inventory classification
2478 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities in class samples. In coral reef image classification, texture features are extracted using the proposed method, LDEDBP. The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts the edge information using the local directional pattern (LDP) from the edge responses available in a particular region, thereby achieving extra discriminative feature value. Typically, LDP extracts the edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a more robust texture descriptor than other texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (WDGWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, achieving the highest overall classification accuracy of 94% compared with other state-of-the-art methods.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
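The LDEDBP descriptor builds on the standard LBP operator. As a minimal sketch of that underlying operator (illustrative only, not the authors' LDEDBP implementation), the 8-neighbor binary code of a 3×3 patch can be computed as follows; the clockwise neighbor ordering shown is one common convention:

```python
import numpy as np

def lbp_code(patch):
    """Basic 8-neighbor local binary pattern (LBP) code of a 3x3 patch:
    each neighbor >= center contributes one bit, in clockwise order."""
    center = patch[1, 1]
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= center) << i for i, n in enumerate(neighbors))

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 6, 8]])
print(lbp_code(patch))  # → 58
```

In practice, each pixel's code indexes a histogram bin, and the histogram over a region serves as the texture feature vector.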
Procedia PDF Downloads 163
2477 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we present fusion techniques and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized to perform the text identification correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques for obtaining features: feature extraction and feature selection. Edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. The edge-hinge distribution is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are considered. Their directions are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, because different approaches use different varieties of features with differing characteristics. Therefore, our study will focus on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages
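The joint probability distribution over direction pairs described above can be sketched as follows; this is an illustrative reconstruction assuming the two fragment directions have already been quantized into discrete bins, not the authors' exact procedure:

```python
import numpy as np

def edge_hinge_distribution(direction_pairs, n_bins=8):
    """Joint probability distribution over quantized direction pairs of
    the two edge fragments emerging from each 'on' window center."""
    hist = np.zeros((n_bins, n_bins))
    for d1, d2 in direction_pairs:
        hist[d1, d2] += 1
    return hist / hist.sum()

pairs = [(0, 2), (0, 2), (1, 5), (3, 7)]  # toy (direction1, direction2) bins
dist = edge_hinge_distribution(pairs)
print(dist[0, 2])  # → 0.5
```

Flattening this distribution gives a fixed-length feature vector per handwriting sample.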
Procedia PDF Downloads 494
2476 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is being provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured based on timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. NMF-based feature vectors are proposed to be used for genre classification together with these conventional basic long-term feature vectors. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification.
In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as a classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs per genre, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features such as the statistical features and modulation spectrum features, the NMF features provided increased accuracy with only a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV together with an SVM with a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
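The factorization underlying the proposed features can be sketched with plain multiplicative-update NMF; the rows of W play the role of the per-song non-negative weighting values. This toy numpy implementation and its random data are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def nmf(V, r, n_iter=300, seed=0):
    """Rank-r NMF via multiplicative updates: V (m x n) ~= W (m x r) @ H (r x n).
    Rows of W act as the non-negative weighting values used as features."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-6
    H = rng.random((r, n)) + 1e-6
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# toy stand-in for non-negative spectral data (20 songs x 12 bins)
V = np.random.default_rng(1).random((20, 12))
W, H = nmf(V, r=4)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(W.shape, rel_err < 0.9)
```

With per-genre bases trained this way, projecting a test song onto each basis yields the weighting values used as its NMF feature vector.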
Procedia PDF Downloads 303
2475 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
Computer-aided decision making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning in decision making systems. A fuzzy rule-based inference technique can be used for classification in order to incorporate human reasoning in the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease dataset from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and also resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
Keywords: decision making, data mining, normalization, fuzzy rule, classification
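The min-max normalization and equal-width partitioning steps can be sketched as follows; the attribute values and the number of intervals are hypothetical illustrations, not the UCI heart disease data:

```python
def min_max_normalize(values):
    """Rescale attribute values linearly to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def equal_width_partition(lo, hi, k):
    """Split [lo, hi] into k equal-width intervals (supports for fuzzy sets)."""
    w = (hi - lo) / k
    return [(lo + i * w, lo + (i + 1) * w) for i in range(k)]

cholesterol = [150, 200, 250, 300, 350]     # hypothetical attribute values
normalized = min_max_normalize(cholesterol)
print(normalized)                           # → [0.0, 0.25, 0.5, 0.75, 1.0]
print(equal_width_partition(0.0, 1.0, 4))
```

Each resulting interval then serves as the support of one fuzzy set over the normalized attribute.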
Procedia PDF Downloads 517
2474 Dual-Channel Reliable Breast Ultrasound Image Classification Based on Explainable Attribution and Uncertainty Quantification
Authors: Haonan Hu, Shuge Lei, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Jijun Tang
Abstract:
This paper focuses on the classification task of breast ultrasound images and conducts research on the reliability measurement of classification results. A dual-channel evaluation framework was developed based on the proposed inference reliability and predictive reliability scores. For the inference reliability evaluation, human-aligned and doctor-agreed inference rationales based on the improved feature attribution algorithm SP-RISA are applied. Uncertainty quantification is used to evaluate the predictive reliability via test-time enhancement. The effectiveness of this reliability evaluation framework has been verified on the breast ultrasound clinical dataset YBUS, and its robustness is verified on the public dataset BUSI. The expected calibration errors on both datasets are significantly lower than those of traditional evaluation methods, which demonstrates the effectiveness of the proposed reliability measurement.
Keywords: medical imaging, ultrasound imaging, XAI, uncertainty measurement, trustworthy AI
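Expected calibration error, the metric used to compare the frameworks, measures the gap between predicted confidence and empirical accuracy. A minimal sketch of the standard equal-width-bin formulation (the toy confidences are illustrative, not data from YBUS or BUSI):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard equal-width-bin ECE: the weighted average gap between
    mean predicted confidence and empirical accuracy per bin."""
    conf = np.asarray(confidences, dtype=float)
    hit = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - hit[mask].mean())
    return ece

# toy predictions: fully confident and always right -> perfectly calibrated
print(expected_calibration_error([1.0, 1.0, 1.0], [1, 1, 1]))  # → 0.0
```

A lower ECE means the model's confidence scores track its actual accuracy more closely.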
Procedia PDF Downloads 101
2473 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis
Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin
Abstract:
Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, and they raise concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and cost and renders the diagnosis system inefficient and difficult to be accepted by the public. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps (CAMs) provide insight into the feature maps that lead to a convolutional neural network's classification; in the case of lung diseases, the region of interest is enhanced by U-Net-assisted CAM visualization. Therefore, our proposed model combines image segmentation models and classifiers to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust for AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
Keywords: multi-output network model, U-net, class activation map, image classification, medical imaging analysis
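The core idea of cropping the class activation map with the U-Net segmentation output can be sketched as an elementwise masking step (an illustrative reconstruction with toy arrays, not the paper's code):

```python
import numpy as np

def crop_cam_with_mask(cam, mask, threshold=0.5):
    """Zero out class-activation-map values outside the predicted
    segmentation mask, restricting the explanation to the lung region."""
    return cam * (mask > threshold)

cam = np.array([[0.9, 0.2],
                [0.4, 0.8]])    # toy 2x2 activation map
mask = np.array([[1.0, 0.0],
                 [0.0, 1.0]])   # toy U-Net segmentation output
print(crop_cam_with_mask(cam, mask))
```

Activations outside the mask are suppressed, so the visualization highlights only anatomically relevant regions.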
Procedia PDF Downloads 202
2472 Availability Analysis of Process Management in the Equipment Maintenance and Repair Implementation
Authors: Onur Ozveri, Korkut Karabag, Cagri Keles
Abstract:
Production downtime and repair costs incurred when machines fail are an important issue in machine-intensive production industries. In the case of failure of more than one machine at the same time, the key issues are which machines have priority for repair, how to determine the optimal repair time to allot to these machines, and how to plan the resources needed for repair. In recent years, the Business Process Management (BPM) technique has brought effective solutions to different problems in business. The main feature of this technique is that it can improve the way a job is done by examining the work of interest in detail. In industry, maintenance and repair work operates as a process, and when a breakdown occurs, the repair work is carried out as a series of processes. The maintenance main process and the repair sub-process are evaluated with the process management technique, so it is thought that this structure could bring a solution. For this reason, this issue was discussed in an international manufacturing company, and a proposal for a solution was developed. The purpose of this study is to implement maintenance and repair work integrated with the process management technique and, at the end of the implementation, to analyze maintenance-related parameters such as quality, cost, time, safety, and spare parts. The international firm that carried out the application operates in a free zone in Turkey, and its core business is producing original equipment technologies, vehicle electrical construction, electronics, safety, and thermal systems for the world's leading light and heavy vehicle manufacturers. First, a project team was established in the firm. The team re-examined the current maintenance process and revised it using process management techniques. The repair process, a sub-process of the maintenance process, was also re-examined.
In the improved processes, the ABC equipment classification technique was used to decide which machine or machines will be given priority in case of failure. This technique prioritizes malfunctioning machines based on their effect on production, product quality, maintenance costs, and work safety. The improved maintenance and repair processes were implemented in the company for three months, and the obtained data were compared with the previous year's data. In conclusion, breakdown maintenance was found to occur in a shorter time, with lower cost and a lower spare parts inventory.
Keywords: ABC equipment classification, business process management (BPM), maintenance, repair performance
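An ABC classification can be sketched as a cumulative-share (Pareto-style) split over per-machine impact scores; the 80%/95% cutoffs and machine names below are illustrative assumptions, not the firm's actual criteria:

```python
def abc_classify(impact_scores, a_cut=0.80, b_cut=0.95):
    """Rank machines by impact score and split the cumulative share into
    A (top ~80%), B (next ~15%), and C (rest) priority classes."""
    total = sum(impact_scores.values())
    ranked = sorted(impact_scores.items(), key=lambda kv: kv[1], reverse=True)
    classes, cum = {}, 0.0
    for name, score in ranked:
        cum += score / total
        classes[name] = "A" if cum <= a_cut else ("B" if cum <= b_cut else "C")
    return classes

machines = {"press": 70, "lathe": 15, "drill": 10, "saw": 5}  # toy impact scores
print(abc_classify(machines))
```

Class A machines would be repaired first; class C machines can wait or be handled with minimal resources.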
Procedia PDF Downloads 194
2471 The Effect of Reflective Thinking on Iranian EFL Learners' Language Learning Strategy Use, L2 Proficiency, and Beliefs about Second Language Learning and Teaching
Authors: Mohammad Hadi Mahmoodi, Mojtaba Farahani
Abstract:
The present study aimed at investigating whether reflective thinking differentiates Iranian EFL learners regarding language learning strategy use, beliefs about language learning and teaching, and L2 proficiency. To this end, the researcher adopted a mixed-method approach. First, 94 EFL learners were asked to complete the Reflective Thinking Questionnaire (Kember et al., 2000), the Beliefs about Language Learning and Teaching Inventory (Horwitz, 1985), the Strategy Inventory for Language Learning (Oxford, 1990), and the Oxford Quick Placement Test. The results of three separate one-way ANOVAs indicated that reflective thinking significantly differentiates Iranian EFL learners concerning (a) language learning strategy use, (b) beliefs about language learning and teaching, and (c) general language proficiency. Furthermore, to see where the differences lay, three separate post-hoc Tukey tests were run, the results of which showed that learners with different levels of reflectivity (high, mid, and low) were significantly different from each other in all three dependent variables. Finally, to increase the validity of the findings, thirty of the participants were interviewed, and the results were analyzed through the template organizing style method (Crabtree & Miller, 1999). The results of the interview analysis supported the results of the quantitative data analysis.
Keywords: reflective thinking, language learning strategy use, beliefs toward language learning and teaching
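The one-way ANOVA at the heart of the quantitative analysis compares between-group to within-group variance. A minimal sketch of the F statistic, with toy scores standing in for the three reflectivity groups (not the study's data):

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    all_vals = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand = all_vals.mean()
    k, n = len(groups), all_vals.size
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

low = [40.0, 42.0, 41.0]    # toy scores for low-reflectivity learners
mid = [50.0, 52.0, 51.0]
high = [60.0, 61.0, 59.0]
print(round(one_way_anova_f([low, mid, high]), 1))  # → 271.0
```

A large F relative to the F distribution's critical value indicates that at least one group mean differs, which is then localized with post-hoc Tukey tests.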
Procedia PDF Downloads 655
2470 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
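The k-mer representation described above amounts to counting overlapping substrings of length k. A minimal sketch (the toy sequence is illustrative, not from the MTB data):

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping k-mers (substrings of length k) in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

seq = "ATGCGATGCA"              # toy DNA fragment
counts = kmer_counts(seq, 3)
print(counts["ATG"])            # → 2
print(sum(counts.values()))     # → 8 (= len(seq) - k + 1)
```

The count vectors over a fixed k-mer vocabulary then serve as the fixed-length features fed to the classification models, and larger k trades a richer representation for more computation.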
Procedia PDF Downloads 167
2469 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction.
The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists among accuracy, computing resources, and explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 159
2468 Fear of Childbirth According to Parity
Authors: Ozlem Karabulutlu, Kiymet Yesilcicek Calik, Nazli Akar
Abstract:
Objectives: To examine fear of childbirth according to parity, gestational age, prenatal education, and obstetric history. Methods: The study was performed as a questionnaire design in a state hospital in Kars, Turkey, with 403 unselected pregnant women who were recruited from the delivery unit. The data were collected via three questionnaires: the first with sociodemographic and obstetric features, the second the Wijma Delivery Expectancy/Experience Questionnaire (W-DEQ), and the third the Beck Anxiety Inventory (BAI). Results: The W-DEQ and BAI scores were higher in nulliparous than in multiparous women (W-DEQ 67.08±28.33 vs. 59.87±26.91, P=0.039<0.05; BAI 18.97±9.5 vs. 16.65±11.83, P=0.0009<0.05, respectively). Moreover, the W-DEQ and BAI scores of pregnant women whose gestational week was ≤37 or ≥41, who did not receive prenatal training, and who had vaginal delivery were higher than those of women whose gestational week was 38-40, who received prenatal training, and who had cesarean delivery (W-DEQ 67.54±29.20, 56.44±22.59, 69.72±25.53, p<0.05; BAI 21.41±9.07, 15.77±11.20, 18.36±10.57, p<0.05, respectively). In both nulliparous and multiparous women, as the W-DEQ score increases, the BAI score increases too (r=0.256; p=0.000<0.05). Conclusions: Severe fear of childbirth and anxiety were more common in nulliparous women, in preterm and post-term pregnancy, and in women who did not receive prenatal training and had vaginal delivery.
Keywords: Beck Anxiety Inventory (BAI), fear of birth, parity, pregnant women, Wijma Delivery Expectancy/Experience Questionnaire (W-DEQ)
Procedia PDF Downloads 289
2467 Acupuncture in the Treatment of Parkinson's Disease-Related Fatigue: A Pilot Randomized, Controlled Study
Authors: Keng H. Kong, Louis C. Tan, Wing L. Aw, Kay Y. Tay
Abstract:
Background: Fatigue is a common problem in patients with Parkinson's disease, with reported prevalence of up to 70%. Fatigue can be disabling and has adverse effects on patients' quality of life. There is currently no satisfactory treatment of fatigue. Acupuncture is effective in the treatment of fatigue, especially that related to cancer. Its role in Parkinson's disease-related fatigue is uncertain. Aims: To evaluate the clinical efficacy of acupuncture treatment in Parkinson's disease-related fatigue. Hypothesis: We hypothesize that acupuncture is effective in alleviating Parkinson's disease-related fatigue. Design: A single center, randomized, controlled study with two parallel arms. Participants: Forty participants with idiopathic Parkinson's disease will be enrolled. Interventions: Participants will be randomized to receive verum (real) acupuncture or placebo acupuncture. The retractable non-invasive sham needle will be used in the placebo group. The intervention will be administered twice a week for five weeks. Main outcome measures: The primary outcome will be the change in general fatigue score of the multidimensional fatigue inventory at week 5. Secondary outcome measures include other subscales of the multidimensional fatigue inventory, movement disorders society-unified Parkinson's disease rating scale, Parkinson's disease questionnaire-39 and geriatric depression scale. All outcome measures will be assessed at baseline (week 0), completion of intervention (week 5) and 4 weeks after completion of intervention (week 9). Results: To date, 23 participants have been recruited and nine have completed the study. The mean age is 63.5±14.2 years, mean duration of Parkinson’s disease is 6.4±1.8 years and mean MDS-UPDRS score is 8.3±2.8. The mean general fatigue score of the multidimensional fatigue inventory is 13.5±4.6. No significant adverse event related to acupuncture is noted. 
Potential significance: If the results are as expected, this study will provide preliminary scientific evidence for the efficacy of acupuncture in Parkinson's disease-related fatigue and open the door for a larger multicentre trial to be performed. In the longer term, it may lead to the integration of acupuncture in the care of patients with Parkinson's disease.
Keywords: acupuncture, fatigue, Parkinson's disease, trial
Procedia PDF Downloads 306
2466 Surface Hole Defect Detection of Rolled Sheets Based on Pixel Classification Approach
Authors: Samira Taleb, Sakina Aoun, Slimane Ziani, Zoheir Mentouri, Adel Boudiaf
Abstract:
Rolling is a pressure treatment technique that modifies the shape of steel ingots or billets between rotating rollers. During this process, defects may form on the surface of the rolled sheets and are likely to affect the performance and quality of the finished product. In our study, we developed a method for detecting surface hole defects using a pixel classification approach. This work includes several steps. First, we performed image preprocessing to delimit areas with and without hole defects on the sheet image. Then, we developed the histograms of each area to generate the gray-level membership intervals of the pixels that characterize each area. Because we observed an overlap between the gray-level intervals of the two areas, we finally performed a learning step based on a series of detection tests to refine the membership intervals of each area and to choose the defect detection criterion that optimizes the recognition of surface holes.
Keywords: classification, defect, surface, detection, hole
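Pixel classification by gray-level interval membership can be sketched as below; the interval bounds are hypothetical, and the "ambiguous" branch stands in for the overlap the authors resolve with their learning step:

```python
def classify_pixel(gray, defect_interval, normal_interval):
    """Assign a pixel to the 'defect' or 'defect-free' area by gray-level
    interval membership; overlapping values are flagged as ambiguous."""
    in_defect = defect_interval[0] <= gray <= defect_interval[1]
    in_normal = normal_interval[0] <= gray <= normal_interval[1]
    if in_defect and not in_normal:
        return "defect"
    if in_normal and not in_defect:
        return "defect-free"
    return "ambiguous"

# hypothetical 8-bit gray-level membership intervals for the two areas
DEFECT, NORMAL = (0, 60), (50, 255)
print(classify_pixel(30, DEFECT, NORMAL))   # → defect
print(classify_pixel(55, DEFECT, NORMAL))   # → ambiguous
print(classify_pixel(200, DEFECT, NORMAL))  # → defect-free
```

Refining the interval bounds during the learning step shrinks the ambiguous overlap and sharpens the detection criterion.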
Procedia PDF Downloads 15
2465 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window. It overcomes these shortcomings by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, through this research, we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
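The imaginary part of the complex Pearson correlation coefficient, the quantity underlying RICI-imCPCC, can be sketched for two analytic signals as follows (a minimal numpy reconstruction; the synthetic 5 Hz signals are illustrative, not EEG data):

```python
import numpy as np

def imag_complex_pearson(z1, z2):
    """Imaginary part of the complex Pearson correlation coefficient
    between two complex (analytic) signals."""
    z1 = z1 - z1.mean()
    z2 = z2 - z2.mean()
    r = np.sum(z1 * np.conj(z2)) / (np.linalg.norm(z1) * np.linalg.norm(z2))
    return r.imag

t = np.linspace(0.0, 1.0, 200, endpoint=False)
z1 = np.exp(2j * np.pi * 5 * t)                   # 5 Hz analytic signal
z2 = np.exp(2j * np.pi * 5 * t - 1j * np.pi / 2)  # same signal, 90-degree lag
print(round(imag_complex_pearson(z1, z2), 6))     # → 1.0
```

Because the imaginary part vanishes for zero-lag (volume-conduction) coupling, it isolates genuinely lagged interactions between regions.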
Procedia PDF Downloads 214
2464 Accuracy Analysis of the American Society of Anesthesiologists Classification Using ChatGPT
Authors: Jae Ni Jang, Young Uk Kim
Abstract:
Background: Chat Generative Pre-training Transformer-3 (ChatGPT; OpenAI, San Francisco, California) is an artificial intelligence chatbot based on a large language model designed to generate human-like text. As the usage of ChatGPT is increasing among less knowledgeable patients, medical students, and anesthesia and pain medicine residents or trainees, we aimed to evaluate the accuracy of ChatGPT-3 responses to questions about the American Society of Anesthesiologists (ASA) classification based on patients' underlying diseases and to assess the quality of the generated responses. Methods: A total of 47 questions were submitted to ChatGPT using textual prompts. The questions were designed for ChatGPT-3 to provide answers regarding ASA classification in response to common underlying diseases frequently observed in adult patients. In addition, we created 18 questions regarding the ASA classification for pediatric patients and pregnant women. The accuracy of ChatGPT's responses was evaluated by cross-referencing with Miller's Anesthesia, Morgan & Mikhail's Clinical Anesthesiology, and the American Society of Anesthesiologists' ASA Physical Status Classification System (2020). Results: Out of the 47 questions pertaining to adults, ChatGPT-3 provided correct answers for only 23, resulting in an accuracy rate of 48.9%. Furthermore, the responses provided by ChatGPT-3 regarding children and pregnant women were mostly inaccurate, as indicated by an accuracy rate of approximately 28% (5 out of 18). Conclusions: ChatGPT provided correct responses to questions relevant to the daily clinical routine of anesthesiologists in approximately half of the cases, while the remaining responses contained errors. Therefore, caution is advised when using ChatGPT to retrieve anesthesia-related information. Although ChatGPT may not yet be suitable for clinical settings, we anticipate significant improvements in ChatGPT and other large language models in the near future.
Regular assessments of ChatGPT's ASA classification accuracy are essential due to the evolving nature of ChatGPT as an artificial intelligence entity. This is especially important because ChatGPT has a clinically unacceptable rate of error and hallucination, particularly in pediatric patients and pregnant women. The methodology established in this study may be used to continue evaluating ChatGPT.
Keywords: American Society of Anesthesiologists, artificial intelligence, Chat Generative Pre-training Transformer-3, ChatGPT
Procedia PDF Downloads 47
2463 Diversity of Arachnological Fauna in an Agricultural Environment: Inventory and Effect of Herbicides
Authors: Benslimane Marwa, Benabbas-Sahki Ilham
Abstract:
Spiders play an important role in agroecosystems due to their great abundance and are considered a valuable group of invertebrates in agricultural land. They are predators of insects harmful to crops, but their use in biological control requires in-depth research on their ecology. During our study, conducted between March 2021 and October of the same year, we counted a total of 768 spiders, which we identified and classified into 14 families. This study compares a station subjected to agricultural practices, including the spreading of herbicides, with another station subjected to the same practices but without the use of phytosanitary products. The inventory shows a strong dominance of the family Gnaphosidae (75.8%). This result suggests that the proliferation of this family is highly beneficial to the crop, as it limits the aphid populations infesting the plot, and the family can therefore be proposed as a biological control agent. The comparative study of the spider populations in the two stations shows the negative effect of agricultural practices on species richness and abundance; diversity, by contrast, is only slightly affected. Finally, we note that the herbicides did not cause a significant imbalance in this agroecosystem, unlike plowing, which showed harmful consequences for spiders.
Keywords: spiders, predator, species richness, herbicides, agricultural practices
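As a rough illustration of how the richness and diversity comparisons above can be computed, the following sketch derives species richness and the Shannon diversity index from family abundance counts; all counts are hypothetical, not the survey's data.

```python
import math

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with non-zero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical family abundances for the two stations (illustrative only);
# the treated station is dominated by a single family, as in the survey.
treated = [582, 40, 30, 25, 20, 15, 10, 8]      # strong single-family dominance
untreated = [120, 110, 95, 90, 80, 75, 70, 60]  # more even distribution

richness_treated, richness_untreated = len(treated), len(untreated)
h_treated, h_untreated = shannon_index(treated), shannon_index(untreated)
print(richness_treated, richness_untreated)
print(h_treated, h_untreated)
```

Equal richness can thus coexist with very different diversity: the evenness of the untreated station, not its family count, is what lifts its Shannon index.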
Procedia PDF Downloads 92
2462 The Impact on the Composition of Survey Refusals' Demographic Profile When Implementing Different Classifications
Authors: Eva Tsouparopoulou, Maria Symeonaki
Abstract:
The internationally documented decline in survey response rates over the last two decades is mainly attributed to refusals. In fieldwork, a refusal may be obtained not only from the respondent himself/herself, but also from other sources on the respondent's behalf, such as other household members, apartment building residents or administrator(s), and neighborhood residents. In this paper, we investigate how the composition of the demographic profile of survey refusals changes when different classifications are implemented, and the classification issues arising from that. The analysis is based on the 2002-2018 European Social Survey (ESS) datasets for Belgium, Germany, and the United Kingdom. For these three countries, the number of selected sample units coded as a type of refusal in all nine rounds under investigation was large enough for the purposes of the analysis. The results indicate the existence of four different possible classifications and the significance of choosing the one that strengthens the contrasts between the demographic profiles of the different types of respondents. Since the foundation of social quantitative research lies in the triptych of definition, classification, and measurement, this study aims to identify the multiplicity of the definition of survey refusals as a methodological tool for the continually growing research on non-response.
Keywords: non-response, refusals, European Social Survey, classification
Procedia PDF Downloads 85
2461 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification for agricultural problems has historically been based on classical machine learning strategies that use hand-engineered features on top of a classification algorithm. This approach tends not to produce results with high accuracy and generalization when the classified elements exhibit significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have a great capacity for learning and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method based on deep convolutional neural networks for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease level presence were defined, in collaboration with agronomists, for the algorithm to classify. Disease scoring performed by experts provided ground truth data for the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% on the discrimination of the disease level classes.
Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks
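A residual network, as named in the keywords, learns a correction F(x) that is added to an identity shortcut. The following minimal NumPy sketch (illustrative weights only, no training) shows the forward pass of a single residual block:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity shortcut: the block computes a residual F(x) = w2 @ relu(w1 @ x)
    and outputs relu(x + F(x))."""
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
w1 = rng.normal(scale=0.1, size=(4, 4))
w2 = rng.normal(scale=0.1, size=(4, 4))
y = residual_block(x, w1, w2)

# With zero weights the residual branch vanishes and the block reduces to relu(x);
# this identity path is what keeps deep residual stacks trainable.
z = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
print(np.allclose(z, relu(x)))  # True
```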
Procedia PDF Downloads 331
2460 Impact of Working Capital Management Strategies on Firm's Value and Profitability
Authors: Jonghae Park, Daesung Kim
Abstract:
The impact of aggressive and conservative working capital strategies on the value and profitability of firms was evaluated by applying panel data regression analysis. The control variables used in the regression models are the natural log of firm size, sales growth, and debt. We collected a panel of 13,988 companies listed on the Korean stock market covering the period 2000-2016. The major findings of this study are as follows: 1) We find a significant negative correlation between firm profitability and the number of days inventory (INV) and days accounts payable (AP); the firm's profitability can thus be improved by reducing the days of inventory and days accounts payable. 2) We also find a significant positive correlation between firm profitability and the number of days accounts receivable (AR) and cash ratio (CR); in other words, cash holdings are associated with high corporate profitability. 3) The Tobin's Q analysis showed that only the number of days accounts receivable (AR) and cash ratio (CR) had a significant relationship with firm value. In conclusion, companies can increase profitability by reducing INV and increasing AP, but INV and AP did not affect corporate value. In particular, it is necessary to increase CR and decrease AR in order to increase the firm's profitability and value.
Keywords: working capital, working capital management, firm value, profitability
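The day-count variables above (INV, AR, AP) and the cash ratio (CR) are standard accounting constructions; a minimal sketch of how they might be computed from balance-sheet items, with purely illustrative figures (not the paper's sample), is:

```python
def working_capital_metrics(inventory, receivables, payables, cash,
                            sales, cogs, current_liabilities):
    """Standard day-count and liquidity metrics used as regressors in such studies."""
    return {
        "days_inventory": 365.0 * inventory / cogs,        # INV
        "days_receivable": 365.0 * receivables / sales,    # AR
        "days_payable": 365.0 * payables / cogs,           # AP
        "cash_ratio": cash / current_liabilities,          # CR
    }

# Illustrative balance-sheet figures (hypothetical firm, single year)
m = working_capital_metrics(inventory=200, receivables=150, payables=100,
                            cash=50, sales=1200, cogs=800,
                            current_liabilities=250)
print(m)
```

In a panel regression these metrics, together with the controls (log size, sales growth, debt), would form the right-hand side, with return on assets or Tobin's Q on the left.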
Procedia PDF Downloads 189
2459 Population Dynamics and Land Use/Land Cover Change on the Chilalo-Galama Mountain Range, Ethiopia
Authors: Yusuf Jundi Sado
Abstract:
Changes in land use are mostly attributable to human actions that result in negative impacts on biodiversity and ecosystem functions. This study aims to analyze the dynamics of land use and land cover changes on the Chilalo-Galama Mountain Range, Ethiopia, for sustainable natural resource planning and management. The study used Landsat 5 Thematic Mapper (TM) data for 1986 and 2001 and Landsat 8 (OLI) data for 2017. Additionally, data from the Central Statistics Agency on human population growth were analyzed. The Semi-Automatic Classification Plugin (SCP) in QGIS 3.2.3 software was used for image classification. Global positioning system data, field observations, and focus group discussions were used for ground verification. Land use/land cover (LU/LC) change analysis was performed using maximum likelihood supervised classification, and changes were calculated for the 1986-2001, 2001-2017, and 1986-2017 periods. The results show that agricultural land increased from 27.85% (1986) to 44.43% and 51.32% in 2001 and 2017, respectively, with overall accuracies of 92% (1986), 90.36% (2001), and 88% (2017). On the other hand, forest decreased from 8.51% (1986) to 7.64% (2001) and 4.46% (2017), and grassland decreased from 37.47% (1986) to 15.22% and 15.01% in 2001 and 2017, respectively. For the years 1986-2017, the largest gain in agricultural land was obtained from grassland. The change matrix also shows that shrubland gained land from agricultural land, afro-alpine, and forest land. Population dynamics is found to be one of the major driving forces of the LU/LC changes in the study area.
Keywords: Landsat, LU/LC change, Semi-Automatic Classification Plugin, population dynamics, Ethiopia
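The gains and losses between classes described above are typically read off a change (transition) matrix cross-tabulating the two classified maps pixel by pixel; a minimal sketch with toy labels (illustrative, not the study's data):

```python
from collections import Counter

def transition_matrix(map_t1, map_t2):
    """Count pixels moving from each class at time 1 to each class at time 2."""
    assert len(map_t1) == len(map_t2)
    return Counter(zip(map_t1, map_t2))

# Toy classified pixels (A = agriculture, G = grassland, F = forest)
t1 = ["G", "G", "G", "F", "F", "A", "A", "A"]
t2 = ["A", "A", "G", "F", "A", "A", "A", "A"]
m = transition_matrix(t1, t2)
print(m[("G", "A")])  # grassland converted to agriculture
```

Diagonal entries of the matrix are persistence; off-diagonal entries such as ("G", "A") are exactly the "agriculture gained from grassland" figures reported in change-detection studies.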
Procedia PDF Downloads 85
2458 Clinical Feature Analysis and Prediction on Recurrence in Cervical Cancer
Authors: Ravinder Bahl, Jamini Sharma
Abstract:
The paper demonstrates an analysis of cervical cancer based on a probabilistic model. It involves a technique for classification and prediction that recognizes the typical and diagnostically most important test features relating to cervical cancer. The main contribution of the research is predicting the probability of recurrence in no-recurrence (first-time detection) cases. A combination of conventional statistical and machine learning tools is applied for the analysis. An experimental study with real data demonstrates the feasibility and potential of the proposed approach for this purpose.
Keywords: cervical cancer, recurrence, no recurrence, probabilistic, classification, prediction, machine learning
Procedia PDF Downloads 360
2457 A Machine Learning Approach for Classification of Directional Valve Leakage in the Hydraulic Final Test
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Due to increasing cost pressure in global markets, artificial intelligence is becoming a technology that is decisive for competition. Predictive quality enables machinery and plant manufacturers to ensure product quality by using data-driven forecasts via machine learning models as a decision-making basis for test results. The use of cross-process Bosch production data along the value chain of hydraulic valves is a promising approach to classifying the quality characteristics of workpieces.
Keywords: predictive quality, hydraulics, machine learning, classification, supervised learning
Procedia PDF Downloads 230
2456 Time-Frequency Feature Extraction Method Based on Micro-Doppler Signature of Ground Moving Targets
Authors: Ke Ren, Huiruo Shi, Linsen Li, Baoshuai Wang, Yu Zhou
Abstract:
Since discriminative features are required for ground moving target classification, we propose a new feature extraction method based on the micro-Doppler signature. First, time-frequency analysis of measured data indicates that the time-frequency spectrograms of three kinds of ground moving targets, i.e., a single walking person, two people walking, and a moving wheeled vehicle, are distinguishable. Then, a three-dimensional time-frequency feature vector is extracted from the spectrograms to capture these differences. Finally, a support vector machine (SVM) classifier is trained with the proposed three-dimensional feature vector. The classification accuracy in categorizing the measured data into the three target kinds is found to be over 96%, which demonstrates the good discriminative ability of the proposed micro-Doppler features.
Keywords: micro-Doppler, time-frequency analysis, feature extraction, radar target classification
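The paper trains an SVM on the 3-D feature vectors; as a simplified stand-in for that classifier, the sketch below assigns hypothetical 3-D time-frequency features with a nearest-centroid rule (all feature values are invented for illustration):

```python
import math

def centroid(vectors):
    """Mean of a list of 3-D feature vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(3))

def classify(x, centroids):
    """Assign x to the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda c: math.dist(x, centroids[c]))

# Hypothetical 3-D time-frequency features per target class (illustrative only)
train = {
    "walker":   [(0.2, 1.1, 0.3), (0.25, 1.0, 0.35)],
    "two_walk": [(0.5, 1.4, 0.6), (0.55, 1.5, 0.65)],
    "vehicle":  [(0.9, 0.2, 1.2), (0.95, 0.25, 1.1)],
}
centroids = {label: centroid(vecs) for label, vecs in train.items()}
print(classify((0.22, 1.05, 0.33), centroids))
```

An SVM would instead place maximum-margin boundaries between the classes, which matters when the clusters are not spherical; the low dimensionality of the feature vector is what keeps either classifier cheap to train.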
Procedia PDF Downloads 405
2455 Clustering the Wheat Seeds Using SOM Artificial Neural Networks
Authors: Salah Ghamari
Abstract:
In this study, the ability of self-organizing map (SOM) artificial neural networks to cluster wheat seed varieties according to their morphological properties was investigated. The SOM is a type of unsupervised competitive learning. Experimentally, five morphological features of 300 seeds (covering three varieties: gaskozhen, Md, and sardari) were obtained using image processing techniques. The results show that the artificial neural network has a good performance (90.33% accuracy) in classifying the wheat varieties despite their high similarity. The highest classification accuracy (100%) was achieved for sardari.
Keywords: artificial neural networks, clustering, self-organizing map, wheat variety
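A minimal sketch of the SOM training rule — pulling the best-matching node and its neighbours toward each sample with a decaying learning rate and neighbourhood radius — on hypothetical 2-D morphological features (the study used five features; all values here are invented):

```python
import math, random

def train_som(data, n_nodes=3, epochs=200, lr0=0.5, seed=1):
    """1-D SOM: each sample pulls its best-matching unit (BMU) and, early in
    training, the BMU's neighbours toward itself."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                      # decaying learning rate
        radius = max(0.5, (n_nodes / 2) * (1.0 - t / epochs))
        for x in data:
            bmu = min(range(n_nodes), key=lambda j: math.dist(x, weights[j]))
            for j in range(n_nodes):
                h = math.exp(-((j - bmu) ** 2) / (2 * radius ** 2))  # neighbourhood
                weights[j] = [w + lr * h * (xi - w) for w, xi in zip(weights[j], x)]
    return weights

# Hypothetical (area, perimeter)-style features for three seed groups
data = [(0.1, 0.2), (0.12, 0.18), (0.5, 0.5), (0.52, 0.48), (0.9, 0.8), (0.88, 0.82)]
weights = train_som(data)
clusters = {min(range(3), key=lambda j: math.dist(x, weights[j])) for x in data}
print(len(clusters))
```

After training, each seed is assigned to its nearest node, so the nodes play the role of cluster prototypes for the varieties.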
Procedia PDF Downloads 656
2454 An Optimal Algorithm for Finding (R, Q) Policy in a Price-Dependent Order Quantity Inventory System with Soft Budget Constraint
Authors: S. Hamid Mirmohammadi, Shahrazad Tamjidzad
Abstract:
This paper is concerned with a single-item continuous review inventory system in which demand is stochastic and discrete. The budget consumed for purchasing the ordered items is not restricted, but an extra cost is incurred when it exceeds a specific value. The unit purchasing price depends on the quantity ordered under an all-units discount cost structure. In many actual systems, the budget occupied by the purchased items is limited, and the system confronts resource shortages by incurring additional costs. Thus, considering resource shortage costs as part of the system costs, especially when the amount of the resource occupied by the purchased items is influenced by quantity discounts, is well motivated by practical concerns. In this paper, an optimization problem is formulated for finding the optimal (R, Q) policy when the system is subject to the budget limitation and discount pricing simultaneously. Properties of the cost function are investigated, and an algorithm based on a one-dimensional search procedure is then proposed for finding an optimal (R, Q) policy that minimizes the expected system costs.
Keywords: (R, Q) policy, stochastic demand, backorders, limited resource, quantity discounts
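A much-simplified sketch of the idea: a one-dimensional search over the order quantity for a cost function with all-units discounts and a soft budget charged as an extra cost on the overrun. All numbers and the deterministic cost form are illustrative assumptions, not the paper's stochastic model (which also optimizes the reorder point R):

```python
def unit_price(q):
    """All-units discount schedule (illustrative breakpoints)."""
    if q >= 500:
        return 8.0
    if q >= 100:
        return 9.0
    return 10.0

def annual_cost(q, demand=1000, order_cost=50.0, hold_rate=0.2,
                budget=1500.0, penalty=0.05):
    p = unit_price(q)
    overrun = max(0.0, p * q - budget)          # soft budget: extra cost, not a hard limit
    return (order_cost * demand / q             # ordering cost
            + hold_rate * p * q / 2             # holding cost
            + p * demand                        # purchase cost
            + penalty * overrun * demand / q)   # budget-overrun charge per cycle

# One-dimensional search over candidate order quantities
best_q = min(range(1, 1001), key=annual_cost)
print(best_q, annual_cost(best_q))
```

Because the discount schedule makes the cost function piecewise (and non-convex at the breakpoints), a simple scan such as this, or the paper's structured search, is needed rather than a single first-order condition.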
Procedia PDF Downloads 641
2453 SEM Image Classification Using CNN Architectures
Authors: Güzin Tirkeş, Özge Tekin, Kerem Kurtuluş, Y. Yekta Yurtseven, Murat Baran
Abstract:
A scanning electron microscope (SEM) is a type of electron microscope mainly used in nanoscience and nanotechnology. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm with an 80%/20% split, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception-ResNet-v2 model was used with a fine-tuning approach. Using a confusion matrix, it was observed that the coated-surface category has a negative effect on the accuracy of the results, since it overlaps with other categories in the data set, thereby confusing the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, increasing accuracy to 96.5%.
Keywords: convolutional neural networks, deep learning, image classification, scanning electron microscope
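The effect described — removing a class that absorbs errors raises overall accuracy — can be read directly from a confusion matrix; a sketch with toy counts (not the paper's data):

```python
def accuracy(confusion):
    """Overall accuracy = diagonal (correct) counts divided by total counts,
    for a confusion matrix given as {true_class: {predicted_class: count}}."""
    correct = sum(confusion[c].get(c, 0) for c in confusion)
    total = sum(sum(row.values()) for row in confusion.values())
    return correct / total

# Toy counts: the ambiguous 'coated' class absorbs errors from both others.
cm = {
    "particles": {"particles": 90, "coated": 10, "fibres": 0},
    "fibres":    {"particles": 0,  "coated": 15, "fibres": 85},
    "coated":    {"particles": 20, "coated": 60, "fibres": 20},
}
print(accuracy(cm))

# Dropping the ambiguous class, as the authors did, removes its row and its
# column of absorbed errors, lifting overall accuracy.
cm2 = {
    "particles": {"particles": 98, "fibres": 2},
    "fibres":    {"particles": 3,  "fibres": 97},
}
print(accuracy(cm2))
```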
Procedia PDF Downloads 125
2452 Empirical Investigation of Bullwhip Effect with Sensitivity Analysis in Supply Chain
Authors: Shoaib Yousaf
Abstract:
The main purpose of this research is the empirical investigation of the bullwhip effect under sensitivity analysis in a two-tier supply chain. Simulation modeling was applied as the research methodology to examine the sensitivity of the bullwhip effect in the rice industry of Pakistan. The research comprises two case studies chosen as a sample. The results confirm that a reduction in production delay reduces the bullwhip effect, which conforms to the time-compression paradigm and the significance of reducing production delay to lessen demand amplification. The results also indicate that increasing the inventory adjustment time decreases the bullwhip effect. Furthermore, since decreasing the value of the smoothing constant alpha increases the damping effect of the exponential smoother, it is not surprising that this also reduces the bullwhip effect. Moreover, reducing the work-in-progress adjustment time also reduces the bullwhip effect. This research will help practitioners and operations managers reduce the major costs of their products in three ways: they can i) reduce their inventory levels, ii) better utilize their capacity, and iii) improve their forecasting techniques. However, this study is based on a two-tier supply chain, while in reality supply chains have many tiers. Hence, future work should extend the analysis to supply chains with more than two tiers.
Keywords: bullwhip effect, rice industry, supply chain dynamics, simulation, sensitivity analysis
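A minimal sketch of the kind of feedback loop such simulations use: an exponential-smoothing forecast plus an inventory correction, with a production delay, measured by the bullwhip ratio var(orders)/var(demand). The structure and every parameter below are illustrative assumptions, not the paper's model:

```python
import random, statistics

def simulate(tp=4, ti=4, alpha=0.3, periods=400, seed=7):
    """Single-echelon ordering loop: exponential-smoothing forecast plus an
    inventory correction, with a production delay of tp periods. Returns the
    bullwhip ratio var(orders) / var(demand)."""
    rng = random.Random(seed)
    forecast, inventory, target = 100.0, 100.0, 100.0
    pipeline = [100.0] * tp              # orders in production / in transit
    orders, demands = [], []
    for _ in range(periods):
        d = rng.gauss(100, 10)
        forecast += alpha * (d - forecast)          # exponential smoothing
        inventory += pipeline.pop(0) - d            # receive oldest order, ship demand
        o = max(0.0, forecast + (target - inventory) / ti)  # inventory correction
        pipeline.append(o)
        orders.append(o)
        demands.append(d)
    return statistics.variance(orders) / statistics.variance(demands)

# Shorter production delay damps the demand amplification, as the study reports.
print(simulate(tp=4), simulate(tp=1))
```

The same loop exposes the other sensitivities in the abstract: raising `ti` (slower inventory correction) or lowering `alpha` (heavier smoothing) both reduce the ratio.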
Procedia PDF Downloads 144
2451 Differential Analysis: Crew Resource Management and Profiles on the Balanced Inventory of Desirable Responding
Authors: Charalambos C. Cleanthous, Ryan Sain, Tabitha Black, Stephen Vera, Suzanne Milton
Abstract:
A concern when administering questionnaires is whether the participant is providing accurate information. The results may be invalid because the person is trying to present themselves in an unrealistically positive manner, referred to as 'faking good', or in an unrealistically negative manner, known as 'faking bad'. The Balanced Inventory of Desirable Responding (BIDR) was used to assess commercial pilots' responses on its two subscales, impression management (IM) and self-deceptive enhancement (SDE), each yielding high or low scores. Thus, the BIDR produces four profiles: IM low and SDE low, IM high and SDE low, IM low and SDE high, and IM high and SDE high. These profiles were used to compare the respondents' answers to crew resource management (CRM) items developed from the USA Federal Aviation Administration's (FAA) guidelines for CRM composition and training. Of particular interest were the results on the IM subscale. The comparisons between those scoring high (faking) versus those scoring low on IM suggest significant differences in their views of the various dimensions of CRM. One of the more disconcerting conclusions is that the high IM scores suggest that these pilots were trying to impress rather than honestly answer the questions regarding their CRM training and practice.
Keywords: USA commercial pilots, crew resource management, faking, social desirability
Procedia PDF Downloads 256
2450 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring
Authors: Younghoon Kim, Seoung Bum Kim
Abstract:
One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, the proportion of abnormal observations cannot be made exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot accommodate. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of normal data points covered equals a constant value specified by the user. By modifying this constant, users can exactly control the proportion of normal data described by the spherical boundary. Thus, the proportion of abnormal observations can be made theoretically equal to an expected Type I error rate in the Phase I process. Moreover, analogous to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart applying the algorithm is proposed. This chart uses a monitoring statistic that characterizes the degree to which a point is abnormal, as obtained through the proposed one-class classification. The control limit of the chart is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated data and real process data from a thin-film-transistor liquid-crystal display process.
Keywords: control chart, mixed integer programming, one-class classification, support vector data description
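A much-simplified, mean-centered illustration of the exact-coverage idea (the paper's MIP optimizes the boundary jointly over continuous and integer variables; here the center is fixed at the sample mean and the radius is simply the k-th smallest distance):

```python
import math

def spherical_boundary(points, k):
    """Center the ball at the sample mean and pick the radius as the k-th smallest
    distance, so the ball covers exactly k of the points (ties aside). The paper's
    MIP instead chooses which k points to cover optimally; this is a simplified
    stand-in for illustration only."""
    dim = len(points[0])
    center = tuple(sum(p[i] for p in points) / len(points) for i in range(dim))
    dists = sorted(math.dist(p, center) for p in points)
    return center, dists[k - 1]

points = [(0, 0), (1, 0), (0, 1), (1, 1), (5, 5)]   # one obvious outlier
center, radius = spherical_boundary(points, k=4)     # cover exactly 4 of 5 points
inside = sum(math.dist(p, center) <= radius for p in points)
print(inside, radius)
```

Setting k = n(1 - α) makes the fraction of Phase I points left outside the boundary exactly the target Type I error rate α, which is precisely what convex formulations such as SVDD can only approximate.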
Procedia PDF Downloads 174
2449 Tea (Camellia sinensis (L.) O. Kuntze) Typology in Kenya: A Review
Authors: Joseph Kimutai Langat
Abstract:
Tea typology is the science of classifying tea. This study, carried out between November 2023 and July 2024, investigated the typological classification nomenclature of processed tea in the world, narrowing down to Kenya. Centre of origin, historical background, growing region, scientific naming system, market, fermentation level, processing/oxidation level, and cultural reasons are currently used to classify tea. Of these, the most common typology is by oxidation and, more specifically, by the production methods within the oxidation categories. While the Asian tea-producing countries categorize tea products based on decreasing oxidation levels during the manufacturing process (black tea, green tea, oolong tea, and instant tea), Kenya's tea typology is based on the degree of fermentation: black tea, purple tea, green tea, and white tea. Tea is also classified into five categories: black tea, green tea, white tea, oolong tea, and dark tea. Black tea is the main tea processed and exported in Kenya, manufactured mainly by withering and rolling, or by the cutting-tearing-curling (CTC) method that ensures efficient conversion of leaf herbage to made tea, followed by oxidizing and drying before sorting into different grades. From these varied typological methods, this review concludes that different regions of the world use different classification nomenclatures. Therefore, since tea typology is not standardized, it is recommended that a global tea regulator dealing with tea classification be created to standardize tea typology, with domestic regulatory bodies in tea-growing countries accredited to implement global typological agreements and resolutions.
Keywords: classification, fermentation, oxidation, tea, typology
Procedia PDF Downloads 40