Search results for: object based classification
29018 A Kruskal Based Heuristic for the Application of Spanning Tree
Authors: Anjan Naidu
Abstract:
In this paper, we first discuss the minimum spanning tree and then use Kruskal's algorithm to obtain it. Based on Kruskal's algorithm, we propose a Kruskal-based heuristic to find the minimum cost in an application of the spanning tree concept.
Keywords: minimum spanning tree, algorithm, heuristic, application, classification of Sub 97K90
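As a concrete illustration of the method the abstract builds on, here is a minimal Kruskal sketch in Python; the edge list and union-find helper are illustrative, not taken from the paper.

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: sort edges by weight, keep those that join
    two different components (tracked with union-find)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # ascending by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Example: 4 nodes, weighted edges given as (weight, u, v)
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal_mst(4, edges))  # minimum total cost 1 + 2 + 3 = 6
```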
Procedia PDF Downloads 444
29017 A Tool for Assessing Performance and Structural Quality of Business Process
Authors: Mariem Kchaou, Wiem Khlif, Faiez Gargouri
Abstract:
Modeling business processes is an essential task when evaluating, improving, or documenting existing business processes. To support such tasks, a business process model (BPM) must have high structural quality and high performance. Evaluating the performance of a business process model is a necessary step in reducing time and cost, while assessing its structural quality aims to improve the understandability and modifiability of the BPMN model. To achieve these objectives, a set of structural and performance measures has been proposed. Given the diversity of measures, we propose a framework that classifies them by integrating both structural and performance aspects. Our classification is based on business process model perspectives (e.g., informational, functional, organizational, behavioral, and temporal) and on the elements (activity, event, actor, etc.) involved in computing the measures. We then implement this framework in a tool for assessing the structural quality and performance of a business process. The tool helps designers select an appropriate subset of measures associated with the corresponding perspective and calculate and interpret their values in order to improve the structural quality and the performance of the model.
Keywords: performance, structural quality, perspectives, tool, classification framework, measures
Procedia PDF Downloads 156
29016 Probabilistic Crash Prediction and Prevention of Vehicle Crash
Authors: Lavanya Annadi, Fahimeh Jafari
Abstract:
Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel, and equipment, but also the loss of life and property in traffic accidents, delays due to traffic congestion, and various indirect costs related to air transport. Much research has been done to identify the factors that affect road accidents, such as road infrastructure, traffic, sociodemographic characteristics, land use, and the environment. The aim of this research is to predict the crash probability of vehicles in the United States using machine learning, based on natural and structural factors and excluding spontaneous causes such as speeding. These factors range from weather conditions (precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity) to road structure features (bump, roundabout, no exit, turning loop, give way, etc.). Probabilities are divided into ten classes. All predictions are based on multiclass classification techniques, which are supervised learning. This study considers all crashes, across all states, collected by the US government. To calculate the probability, the multinomial expected value was used and assigned as the crash-probability classification label. We applied three classification models: multiclass logistic regression, random forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the role played by natural and structural factors in crashes. The paper provides in-depth insights through exploratory data analysis.
Keywords: road safety, crash prediction, exploratory analysis, machine learning
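A minimal sketch of the kind of multiclass pipeline the abstract describes, using scikit-learn and XGBoost; the random features and ten-class labels are placeholders, not the authors' dataset, so the accuracy printed here is near chance by construction.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Placeholder features: weather and road-structure indicators
X = rng.random((1000, 8))            # e.g., visibility, wind speed, bump, ...
y = rng.integers(0, 10, size=1000)   # ten crash-probability classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(objective="multi:softprob", n_estimators=200,
                      max_depth=6, eval_metric="mlogloss")
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```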
Procedia PDF Downloads 111
29015 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III, or IV, was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and ~55% had squamous cell carcinoma (SqCC). MATLAB was employed to extract 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and to automatically classify tumor stage and subtype.
Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
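A loose Python sketch of the GLCM step of such a pipeline (the study itself used MATLAB); scikit-image's graycomatrix/graycoprops compute co-occurrence features, and the region of interest here is synthetic.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
roi = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # stand-in tumor ROI

# Co-occurrence matrix at distance 1 for four directions, then scalar features
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=64, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # a 51-feature row would combine these with FOS/GLRLM/Laws
```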
Procedia PDF Downloads 326
29014 Diagnosis of the Heart Rhythm Disorders by Using Hybrid Classifiers
Authors: Sule Yucelbas, Gulay Tezel, Cuneyt Yucelbas, Seral Ozsen
Abstract:
In this study, we attempted to identify certain heart rhythm disorders from electrocardiography (ECG) data taken from the MIT-BIH arrhythmia database by extracting the required features and presenting them to artificial neural network (ANN), artificial immune system (AIS), artificial-immune-system-based artificial neural network (AIS-ANN), and particle swarm optimization-based artificial neural network (PSO-NN) classifier systems. The main purpose of this study is to evaluate the performance of the hybrid AIS-ANN and PSO-NN classifiers against ANN and AIS. For this purpose, RR-interval data were obtained for normal sinus rhythm (NSR), atrial premature contraction (APC), sinus arrhythmia (SA), ventricular trigeminy (VTI), ventricular tachycardia (VTK), and atrial fibrillation (AF). These data were combined into pairs (NSR-APC, NSR-SA, NSR-VTI, NSR-VTK, and NSR-AF), the discrete wavelet transform was applied to each group, and two data sets with 9 and 27 features were obtained after data reduction. The data were first shuffled, and 4-fold cross-validation was then applied to create the training and testing data. Training and testing accuracy rates and training times were compared. As a result, the performance of the hybrid classification systems, AIS-ANN and PSO-NN, was seen to be close to that of ANN, and the hybrid systems performed much better than AIS. However, ANN had a much shorter training time than the other systems; in terms of training time, ANN was followed by PSO-NN, AIS-ANN, and AIS, respectively. The features extracted from the data also affected the classification results significantly.
Keywords: AIS, ANN, ECG, hybrid classifiers, PSO
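A rough sketch of the wavelet-plus-network stage under stated assumptions: synthetic RR-interval series, PyWavelets for the DWT, and scikit-learn's MLP standing in for the ANN; the hybrid AIS/PSO training schemes are not reproduced.

```python
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def dwt_features(rr, wavelet="db4", level=3):
    """Compress an RR-interval sequence into per-sub-band energy features."""
    coeffs = pywt.wavedec(rr, wavelet, level=level)
    return [np.sum(c ** 2) for c in coeffs]   # one energy value per sub-band

# Synthetic stand-ins: NSR-like (label 0) vs. arrhythmia-like (label 1) series
y = rng.integers(0, 2, 200)
X = np.array([dwt_features(rng.normal(0.8 + 0.2 * lbl, 0.05 + 0.1 * lbl, 64))
              for lbl in y])

# 4-fold cross-validation mirrors the protocol described in the abstract
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print(cross_val_score(clf, X, y, cv=4).mean())
```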
Procedia PDF Downloads 442
29013 Trigonelline: A Promising Compound for the Treatment of Alzheimer's Disease
Authors: Mai M. Farid, Ximeng Yang, Tomoharu Kuboyama, Chihiro Tohda
Abstract:
Trigonelline is a major alkaloid component of Trigonella foenum-graecum L. (fenugreek) and has previously been reported as a potential neuroprotective agent, especially in Alzheimer's disease (AD). However, the previous data were unclear, and the mouse models used were not well established. In the present study, the effect of trigonelline on memory function was investigated in the 5XFAD transgenic mouse model of Alzheimer's disease, which overexpresses mutated APP and PS1 genes. Oral administration of trigonelline for 14 days significantly enhanced object recognition and object location memory. Plasma and cerebral cortex were isolated at 30 min, 1 h, 3 h, and 6 h after oral administration. LC-MS/MS analysis detected trigonelline in both plasma and cortex from 30 min onward, suggesting good penetration of trigonelline into the brain. In addition, trigonelline significantly ameliorated axonal and dendritic atrophy in amyloid β-treated cortical neurons. These results suggest that trigonelline could be a promising therapeutic candidate for AD.
Keywords: Alzheimer's disease, cortical neurons, LC-MS/MS analysis, trigonelline
Procedia PDF Downloads 147
29012 Enhancing Spatial Interpolation: A Multi-Layer Inverse Distance Weighting Model for Complex Regression and Classification Tasks in Spatial Data Analysis
Authors: Yakin Hajlaoui, Richard Labib, Jean-François Plante, Michel Gamache
Abstract:
This study introduces the Multi-Layer Inverse Distance Weighting Model (ML-IDW), inspired by the mathematical formulation of both multi-layer neural networks (ML-NNs) and the Inverse Distance Weighting model (IDW). ML-IDW leverages the processing capabilities of ML-NNs, characterized by compositions of learnable non-linear functions applied to input features, and incorporates IDW's ability to learn anisotropic spatial dependencies, presenting a promising solution for nonlinear spatial interpolation and learning from complex spatial data. We employ gradient descent and backpropagation to train ML-IDW, comparing its performance against conventional spatial interpolation models such as Kriging and standard IDW on regression and classification tasks using simulated spatial datasets of varying complexity. The results highlight the efficacy of ML-IDW, particularly in handling complex spatial datasets, with lower mean square error in regression and higher F1 score in classification.
Keywords: deep learning, multi-layer neural networks, gradient descent, spatial interpolation, inverse distance weighting
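For reference, a minimal NumPy sketch of the standard IDW baseline the paper compares against (the multi-layer, trainable variant is the paper's contribution and is not reproduced here); the power parameter p=2 is a common default, not the authors' choice.

```python
import numpy as np

def idw_predict(xy_known, z_known, xy_query, p=2, eps=1e-12):
    """Standard inverse distance weighting: each prediction is a
    distance-weighted average of the known values, weight = 1/d^p."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)              # eps guards against d == 0
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(3)
pts = rng.random((50, 2))                 # known sample locations
vals = np.sin(4 * pts[:, 0]) + pts[:, 1]  # known field values
queries = rng.random((5, 2))
print(idw_predict(pts, vals, queries))
```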
Procedia PDF Downloads 52
29011 Products in Early Development Phases: Ecological Classification and Evaluation Using an Interval Arithmetic Based Calculation Approach
Authors: Helen L. Hein, Joachim Schwarte
Abstract:
As a pillar of sustainable development, ecology has become an important milestone in the research community, especially due to global challenges like climate change. The ecological performance of products can be scientifically assessed with life cycle assessments. In the construction sector, significant amounts of CO2 emissions are attributable to the energy used for building heating. Sustainable insulating construction materials are therefore essential, and aerogels have been explored intensively in recent years due to their low thermal conductivity. The WALL-ACE project accordingly aims to develop an aerogel-based thermal insulating plaster with very low thermal conductivity. But because a lot of information is still missing or not yet accessible in early development phases, the ecological assessment of innovative products is increasingly based on uncertain data, which can lead to significant deviations in the results. To predict realistically how meaningful the results are and how viable the developed products may be with regard to their respective markets, these deviations have to be considered. This study therefore presents a classification method that allows comparing the ecological performance of new products with already established and competitive materials. To achieve this, an alternative calculation method was used that computes with lower and upper bounds, covering all possible values in the absence of precise data. The life cycle analysis of the considered products was conducted with an interval arithmetic based calculation method. The results lead to the conclusion that the interval solutions describing the possible environmental impacts are so wide that the usability of the results is limited. Nevertheless, further reduction of the environmental impacts of aerogels seems necessary for them to become more competitive in the future.
Keywords: aerogel-based, insulating material, early development phase, interval arithmetic
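A minimal sketch of the bounding idea, assuming a toy impact model (quantity × emission factor, summed over components); the Interval class and the numbers are illustrative, not the project's data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):  # take all cross products, keep the extremes
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))

# Toy LCA: kg CO2e = sum(quantity * emission factor), factors uncertain
quantities = [Interval(10, 12), Interval(4, 4)]        # kg of material
factors    = [Interval(1.8, 3.1), Interval(0.4, 0.9)]  # kg CO2e per kg
total = Interval(0.0, 0.0)
for q, f in zip(quantities, factors):
    total = total + q * f
print(total)  # ~Interval(lo=19.6, hi=40.8): the spread of possible impacts
```

The width of the resulting interval is exactly the effect the abstract reports: with uncertain early-phase factors, the bounds can become too wide to discriminate between products.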
Procedia PDF Downloads 140
29010 Lexical Classification of Compounds in Berom: A Semantic Description of N-V Nominal Compounds
Authors: Pam Bitrus Marcus
Abstract:
Compounds in Berom, a Niger-Congo language spoken in parts of central Nigeria, have been understudied, and the semantics of N-V nominal compounds have not been sufficiently delineated. This study describes the lexical classification of compounds in Berom and, specifically, examines the semantics of nominal compounds with N-V constituents. The study relied on a data set of 200 compounds drawn from Bere Naha (a newsletter published in Berom). Contrary to what the nominalization process would predict for the lexical class of compounds, the study revealed that verbal and adjectival classes of compounds are also attested in Berom, and that N-V nominal compounds have an agentive or locative interpretation determined not solely by the meaning of the compound's constituents but by the context of usage.
Keywords: Berom, Berom compounds, nominal compound, N-V compounds
Procedia PDF Downloads 78
29009 Application of Fuzzy Clustering on Classification of Agile Supply Chain Firms
Authors: Hamidreza Fallah Lajimi, Elham Karami, Alireza Arab, Fatemeh Alinasab
Abstract:
Being responsive is an increasingly important skill for firms in today's global economy; thus, firms must be agile. Naturally, it follows that an organization's agility depends on its supply chain being agile. However, achieving supply chain agility is a function of other abilities within the organization. This paper analyses results from a survey of 71 Iranian manufacturing companies in order to identify some of the factors that make organizations agile in managing their supply chains. We then classify these companies into four clusters with the fuzzy c-means technique, using four validation functions to automatically determine the optimal number of clusters.
Keywords: agile supply chain, clustering, fuzzy clustering, business engineering
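A compact sketch of the fuzzy c-means iteration under stated assumptions (random 2-D data, fuzzifier m=2, fixed c=4); the survey features and the validation indices used to pick the cluster count are not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c=4, m=2.0, iters=100, seed=0):
    """Alternate between membership and center updates (Bezdek's FCM)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))         # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)       # u_ik from distances
    return centers, U

X = np.random.default_rng(4).random((71, 2))  # 71 firms, 2 toy agility scores
centers, U = fuzzy_c_means(X)
print(centers)
print(U.sum(axis=1)[:3])                      # memberships sum to 1 per firm
```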
Procedia PDF Downloads 712
29008 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier
Authors: Atanu K Samanta, Asim Ali Khan
Abstract:
Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of a brain tumor through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from the segmented tumor portion using the gray-level co-occurrence matrix (GLCM), and an artificial neural network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy using this method is found to be 98%, which is significantly good.
Keywords: brain tumor, computer-aided diagnostic (CAD) system, gray-level co-occurrence matrix (GLCM), tumor segmentation, level set method
Procedia PDF Downloads 511
29007 Voltage Problem Location Classification Using Performance of Least Squares Support Vector Machine LS-SVM and Learning Vector Quantization LVQ
Authors: M. Khaled Abduesslam, Mohammed Ali, Basher H. Alsdai, Muhammad Nizam Inayati
Abstract:
This paper presents voltage problem location classification using the performance of the Least Squares Support Vector Machine (LS-SVM) and Learning Vector Quantization (LVQ) in an electrical power system, implemented on the IEEE 39-bus New England system. The data were collected from time-domain simulation using the Power System Analysis Toolbox (PSAT). Simulation outputs such as voltage, phase angle, real power, and reactive power were taken as inputs to estimate voltage stability at particular buses based on the Power Transfer Stability Index (PTSI). The simulation was carried out on the IEEE 39-bus test system by increasing the load on the system's buses. To verify the proposed LS-SVM, its performance was compared to that of LVQ. The results showed that LS-SVM is faster and better than LVQ: LS-SVM achieved 0% misclassification, whereas LVQ had 7.69% misclassification.
Keywords: IEEE 39 bus, least squares support vector machine, learning vector quantization, voltage collapse
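A bare-bones sketch of LS-SVM classification on toy two-class data, assuming an RBF kernel; unlike a standard SVM, LS-SVM reduces training to solving one linear system, which is part of why it is fast.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]."""
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    return sol[0], sol[1:]                    # bias b, multipliers alpha

def lssvm_predict(X, y, b, alpha, Xq, sigma=1.0):
    return np.sign(rbf(Xq, X, sigma) @ (alpha * y) + b)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1, 0.4, (20, 2)), rng.normal(1, 0.4, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, y, b, alpha, X) == y).mean())  # training accuracy
```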
Procedia PDF Downloads 441
29006 Ensemble of Deep CNN Architecture for Classifying the Source and Quality of Teff Cereal
Authors: Belayneh Matebie, Michael Melese
Abstract:
The study focuses on addressing the challenges in classifying and ensuring the quality of Eragrostis tef (teff), the smallest cereal grain. Traditional classification is challenging because of the grain's small size and the similarity of its environmental characteristics. To overcome this, this study employs a machine learning approach to develop a source and quality classification system for teff cereal. Data were collected from various production areas in the Amhara region, considering two quality levels (high and low) across eight classes. A total of 5,920 images were collected, 740 for each class. Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, were applied to preprocess the data. A convolutional neural network (CNN) was then used to extract relevant features and reduce dimensionality. The dataset was split into 80% for training and 20% for testing. Different classifiers, including FVGG16, FINCV3, QSCTC, EMQSCTC, SVM, and RF, were employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. An ensemble of FVGG16, FINCV3, and QSCTC using the max-voting approach outperforms the individual algorithms.
Keywords: teff, ensemble learning, max-voting, CNN, SVM, RF
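A small scikit-learn sketch of the max-voting idea on placeholder features; the abstract's CNN-based models (FVGG16/FINCV3/QSCTC) are stood in for by generic classifiers, and hard voting means each model casts one label and the majority wins.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder for extracted image features: 8 classes, as in the abstract
X, y = make_classification(n_samples=800, n_features=32, n_classes=8,
                           n_informative=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=5000)),
                ("svm", SVC()),
                ("rf", RandomForestClassifier(n_estimators=200))],
    voting="hard")  # majority vote over the base models' predicted labels
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```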
Procedia PDF Downloads 53
29005 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)
Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira
Abstract:
Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demand of the population, taking into account their habits and the risks that these foods may pose. Wheat and its by-products, such as semolina, are strong candidates as food vehicles, since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Methods for analyzing and quantifying these kinds of components are destructive and require lengthy sample preparation and analysis. The industry has therefore searched for faster and less invasive methods, such as near-infrared spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields a high amount of data. NIR spectroscopy therefore requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as principal component analysis (PCA) and linear discriminant analysis (LDA). PCA is well suited to NIR, as it can handle many spectra at a time and can be used for non-supervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. LDA, on the other hand, is a supervised method that searches for the canonical variables (CV) with the maximum separation among categories; in LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable NIR spectrometer for identification and classification of pure and fiber-fortified semolina samples. Fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used in PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy combined with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, the multivariate analyses (PCA and LDA) show that NIR combined with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina
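A minimal chemometrics-style sketch, with synthetic "spectra" standing in for the NIR measurements: PCA compresses each spectrum to a few scores, and LDA separates the three classes (pure, low fiber, high fiber) in that reduced space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
wavelengths = np.linspace(0, 1, 200)
# Synthetic spectra: class 0 = pure, 1 = low fiber, 2 = high fiber
y = rng.integers(0, 3, 150)
X = (np.exp(-((wavelengths - 0.5) ** 2) / 0.02)[None, :]      # base band
     + 0.15 * y[:, None] * np.sin(6 * wavelengths)[None, :]   # fiber effect
     + rng.normal(0, 0.05, (150, 200)))                       # noise

model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy
```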
Procedia PDF Downloads 212
29004 An Experimental Study for Assessing Email Classification Attributes Using Feature Selection Methods
Authors: Issa Qabaja, Fadi Thabtah
Abstract:
Email phishing classification is one of the vital problems in the online security research domain and has attracted several scholars due to its impact on the payments users perform online daily. One way to achieve good performance with detection algorithms on the email phishing problem is to identify the minimal set of features that significantly impact the phishing detection rate. This paper investigates three known feature selection methods, Information Gain (IG), chi-square, and Correlation Feature Set (CFS), on the email phishing problem to separate highly influential features from less influential ones in phishing detection. We measure the degree of influence by applying four data mining algorithms to a large set of features and comparing their accuracy on the complete feature set before and after feature selection is applied. After conducting experiments, the results show that 12 common significant features were chosen among the considered features by the feature selection methods. Further, the average detection accuracy achieved by the data mining algorithms on the reduced 12-feature set was only slightly affected when compared with the one achieved on the 47-feature set.
Keywords: data mining, email classification, phishing, online security
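A short sketch of feature ranking in the spirit the abstract describes, using scikit-learn on synthetic data: chi-square and mutual information (an information-gain analogue) are built in, while CFS is not, so only the first two are shown.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import BernoulliNB

X, y = make_classification(n_samples=2000, n_features=47, n_informative=12,
                           random_state=0)
X = (X > 0).astype(int)   # binarize: 47 yes/no phishing indicators

for name, scorer in [("chi2", chi2), ("mutual info", mutual_info_classif)]:
    X12 = SelectKBest(scorer, k=12).fit_transform(X, y)
    full = cross_val_score(BernoulliNB(), X, y, cv=5).mean()
    reduced = cross_val_score(BernoulliNB(), X12, y, cv=5).mean()
    print(f"{name}: 47 features {full:.3f} vs 12 features {reduced:.3f}")
```

The comparison printed at the end mirrors the abstract's finding: if the 12 selected features carry most of the signal, accuracy on the reduced set should be only slightly lower than on the full 47-feature set.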
Procedia PDF Downloads 432
29003 Proposal for a Generic Context Meta-Model
Authors: Jaouadi Imen, Ben Djemaa Raoudha, Ben Abdallah Hanene
Abstract:
Access to relevant information adapted to users' needs, preferences, and environment is a challenge in many running applications. This has led to the emergence of context-aware systems. To facilitate the development of this class of applications, these applications should share a common context meta-model. In this article, we present our context meta-model, defined using the OMG Meta Object Facility (MOF). This meta-model is based on the analysis and synthesis of context concepts proposed in the literature.
Keywords: context, meta-model, MOF, awareness system
Procedia PDF Downloads 560
29002 A Review and Classification of Maritime Disasters: The Case of Saudi Arabia's Coastline
Authors: Arif Almutairi, Monjur Mourshed
Abstract:
Due to varying geographical and tectonic factors, the region of Saudi Arabia has been subjected to numerous natural and man-made maritime disasters during the last two decades. Natural maritime disasters, such as cyclones and tsunamis, have been recorded in coastal areas of the Indian Ocean (including the Arabian Sea and the Gulf of Aden). The Indian Ocean is therefore widely recognised as a potential source of future destructive natural disasters that could affect Saudi Arabia's coastline. Meanwhile, man-made maritime disasters, such as those arising from piracy and oil pollution, occur in the Red Sea and the Arabian Gulf, which are key locations for oil export and transportation between Asia and Europe. This paper provides a brief overview of maritime disasters around Saudi Arabia's coastline in order to classify them by frequency of occurrence and location and to discuss their future impact on the region. Results show that the Arabian Gulf will be more vulnerable to natural maritime disasters because of its location, whereas the Red Sea is more vulnerable to man-made maritime disasters, as it is the key location for transportation between Asia and Europe. The results also show that, with the aid of proper classification, effective disaster management can reduce the consequences of maritime disasters.
Keywords: disaster classification, maritime disaster, natural disasters, man-made disasters
Procedia PDF Downloads 189
29001 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause of concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs for each of these three aspects, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, and couples a traffic-light risk assessment method to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
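As a simplified illustration of the kind of check the framework formalizes (e.g., behind a PoCE-style KPI), the sketch below compares a logistic model's own coefficient-based feature ranking with the ranking implied by mean absolute SHAP values. A real multilevel model and the LIME comparison are omitted, and the shap API usage assumes a recent version of the library.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
model = LogisticRegression(max_iter=5000).fit(X, y)

# Model's intrinsic ranking: standardized absolute coefficients
intrinsic = np.argsort(-np.abs(model.coef_[0] * X.std(axis=0)))

# Explainer's ranking: mean absolute SHAP value per feature
sv = shap.Explainer(model, X)(X)
explained = np.argsort(-np.abs(sv.values).mean(axis=0))

print("intrinsic order:", intrinsic)
print("SHAP order:    ", explained)
print("agreement:", (intrinsic == explained).mean())  # share of matched ranks
```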
Procedia PDF Downloads 94
29000 Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds
Authors: Hesheng Wang, Haoyu Wang, Chungang Zhuang
Abstract:
Estimating the 6D pose of objects is a core step in robot bin-picking tasks. The problem is that, in real applications, various objects are usually randomly stacked with heavy occlusion. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To resolve the ambiguity of symmetric poses, we propose a labeling method that helps the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
Keywords: pose estimation, deep learning, point cloud, bin-picking, 3D computer vision
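To illustrate why three predicted points suffice for a 6D pose, here is a sketch that recovers a rotation and translation from three model-to-scene point correspondences with the Kabsch/SVD method; this is a generic reconstruction, not necessarily the authors' exact regression-and-refinement pipeline.

```python
import numpy as np

def pose_from_points(model_pts, scene_pts):
    """Least-squares rigid transform R, t with scene ≈ R @ model + t."""
    cm, cs = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - cm).T @ (scene_pts - cs)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation (det=+1)
    return R, cs - R @ cm

# Three keypoints on the object model and their predicted scene locations
# (here generated from a known ground-truth pose to check the recovery)
model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
scene = model @ R_true.T + np.array([0.5, 0.2, 0.3])
R, t = pose_from_points(model, scene)
print(np.allclose(R, R_true), t)                   # True [0.5 0.2 0.3]
```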
Procedia PDF Downloads 161
28999 Kantian Epistemology in Examination of the Axiomatic Principles of Economics: The Synthetic a Priori in the Economic Structure of Society
Authors: Mirza Adil Ahmad Mughal
Abstract:
In the Critique of Pure Reason, transcendental analytics combines space and time, as conditions of the possibility of the phenomenon from the transcendental aesthetic, with the pure notion of magnitude-intuition. The property of continuity, as a qualitative result of additive magnitude, brings the possibility of connecting with experience, even though only as a potential, because of the a priori necessity of the assumption; this follows from the syntheticity of the a priori task of a scientific method of philosophy given by Kant, which precludes the application of categories to something not empirically reducible to the content of such a category's corresponding possible object. This continuity, as the qualitative result of the a priori constructed notion of magnitude, lies as a fundamental assumption and property of what microeconomic theory calls 'choice rules', which combine the potentially empirical and practical budget-price pairs with preference relations. The latter result is the purest qualitative side of the choice rules' otherwise autonomously quantitative nature. The theoretical, barring the empirical, nature of this qualitative result is a synthetic a priori truth, which it should be, if at all, if the axiomatic structure of economic theory is held to be correct. It has a potentially verifiable content as its possible object in the form of quantitative price-budget pairs, yet the object that serves the respective Kantian category is itself qualitative: utility. This article explores the validity of the Kantian qualifications for this application of 'categories' to the economic structure of society.
Keywords: categories of understanding, continuity, convexity, psyche, revealed preferences, synthetic a priori
Procedia PDF Downloads 98
28998 The Correlation between Hypomania, Creative Potential and Type of Major in Undergraduate Students
Authors: Dhea Kothari
Abstract:
An extensive amount of research has examined the positive relationship between creativity and hypomania in terms of creative accomplishments, eminence, behaviors, and occupations. Previous research recruited participants based on creative occupations or stages of hypomania or bipolar disorder. This thesis focused on the relationship between hypomania and creative cognitive potential, such as divergent thinking and insight problem-solving. This was examined at the undergraduate level by recruiting students majoring in art, students majoring in natural sciences (NSCI), and students double-majoring in arts and NSCI. Participants were given a modified Alternate Uses Task (AUT) to measure divergent thinking and a set of rebus puzzles to measure insight problem-solving; both tasks involve a degree of overcoming functional fixedness. A negative association was observed between hypomania and originality of responses on the AUT when an object with low functional fixedness was given to all participants. On the other hand, a positive association was found between hypomania and originality of responses on the AUT when an object with high functional fixedness was given to participants majoring in NSCI. The research therefore suggests that an increased ability to overcome functional fixedness might be central to individuals with hypomania and individuals with higher creative cognitive potential.
Keywords: creative cognition, convergent thinking, creativity, divergent thinking, insight, major type, problem-solving
Procedia PDF Downloads 94
28997 Competitors' Influence Analysis of a Retailer by Using Customer Value and Huff's Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record customers' daily purchasing behaviors in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. Customer value is an ID-POS-based indicator for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study first focused on customer value and the distance between a customer's home and supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of one supermarket chain. During the modeling process, three primary problems existed: the incomparability of customer values, the multicollinearity between customer value and distance data, and the number of valid partial regression coefficients. Improved customer value, Huff's gravity model, and inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models, based on these three methods, for loyal-customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal-customer classification. The model that includes all three methods is the best one for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: customer value, Huff's gravity model, POS, retailer
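Huff's gravity model itself is compact enough to show directly: the probability that a customer at home location i visits store j grows with store attractiveness and decays with distance. A minimal sketch with illustrative exponents (alpha=1, beta=2), not the paper's fitted values:

```python
import numpy as np

def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """P_ij = A_j^alpha / d_ij^beta, normalized over all stores j."""
    utility = attractiveness ** alpha / distances ** beta
    return utility / utility.sum(axis=1, keepdims=True)

stores = np.array([1200.0, 800.0, 2000.0])  # e.g., floor space as attractiveness
dist = np.array([[0.5, 1.2, 2.0],           # km from customer 1 to each store
                 [1.5, 0.4, 1.0]])          # km from customer 2 to each store
print(huff_probabilities(stores, dist))     # each row sums to 1
```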
Procedia PDF Downloads 123
28996 3D Vision Transformer for Cervical Spine Fracture Detection and Classification
Authors: Obulesh Avuku, Satwik Sunnam, Sri Charan Mohan Janthuka, Keerthi Yalamaddi
Abstract:
In the United States alone, there are over 1.5 million spine fractures per year, resulting in about 17,730 spinal cord injuries. The cervical spine is where fractures most frequently occur. The prevalence of spinal fractures in the elderly has increased, and in this population fractures may be harder to see on imaging because of coexisting degenerative illness and osteoporosis. Nowadays, computed tomography (CT) is used almost exclusively instead of radiography (x-rays) for the imaging diagnosis of adult spine fractures. To prevent neurologic deterioration and paralysis after trauma, it is vital to detect any vertebral fractures as early as possible. Many approaches have been proposed for the classification of the cervical spine using 2D models. In this paper, we instead use vision transformers, a state-of-the-art model family in image classification, making minimal changes to the ViT architecture to turn it into a 3D-enabled architecture; the model is evaluated using a weighted multi-label logarithmic loss. The problem statement is taken from a previously held Kaggle competition, RSNA 2022 Cervical Spine Fracture Detection.
Keywords: cervical spine, spinal fractures, osteoporosis, computed tomography, 2D models, ViT, multi-label logarithmic loss, Kaggle, public score, private score
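A small sketch of a weighted multi-label logarithmic loss of the kind used for evaluation; the per-label weights below are placeholders, not the competition's official weighting.

```python
import numpy as np

def weighted_multilabel_logloss(y_true, y_prob, weights, eps=1e-15):
    """Weighted mean of per-label binary log losses.
    y_true, y_prob: (n_samples, n_labels); weights: (n_labels,)."""
    p = np.clip(y_prob, eps, 1 - eps)
    per_label = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    return float((per_label * weights).sum() / (weights.sum() * len(y_true)))

# Two scans, three labels (e.g., C1 fracture, C2 fracture, patient_overall)
y_true = np.array([[0, 1, 1], [0, 0, 0]])
y_prob = np.array([[0.1, 0.8, 0.9], [0.2, 0.1, 0.1]])
weights = np.array([1.0, 1.0, 2.0])  # placeholder: overall label counts double
print(weighted_multilabel_logloss(y_true, y_prob, weights))
```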
Procedia PDF Downloads 114
28995 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves applying a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested using predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of the volume, nature (semi-structured or unstructured), and variety of the data, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow calculations to be parallelized and distributed. In this paper, we propose extending the predictive analysis algorithm Classification and Regression Trees (CART) to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined to make it applicable to huge quantities of data.
Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm
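To make the extension target concrete, here is a sketch of the core per-node computation in CART, an exhaustive best-split search minimizing weighted Gini impurity; each feature's threshold scan is independent, which is exactly the kind of work a distributed CART can parallelize. This is textbook CART, not the authors' extended version.

```python
import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - (p ** 2).sum()

def best_split(x, y):
    """Scan thresholds on one feature; return the split with the lowest
    weighted Gini impurity of the two resulting child nodes."""
    best = (np.inf, None)
    for thr in np.unique(x)[:-1]:
        left, right = y[x <= thr], y[x > thr]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[0]:
            best = (score, thr)
    return best

rng = np.random.default_rng(9)
x = rng.random(200)
y = (x > 0.6).astype(int)   # ground-truth decision boundary at 0.6
print(best_split(x, y))     # impurity ~0, threshold near 0.6
```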
Procedia PDF Downloads 142
28994 The Awareness of Computer Science Students Regarding the Security of Location Based Games
Authors: Jacques Barnard, Magda Huisman, Gunther R. Drevin
Abstract:
Rapid expansion and development in the mobile technology market has created an opportunity for users to participate in location-based games. As a consequence of this fast-expanding market and new technology, it is important to be aware of the implications for security. This paper measures the security awareness of games' participants, as well as that of university students, with regard to their year of study and gamer classification. This serves to provide insight into discernible differences in awareness of the security implications of these technologies. The data were accumulated via a web questionnaire completed yearly by students from the respective year groups. The results signify a meaningful disparity in security awareness among students in the different study years. This awareness, however, does not always impact gamers.
Keywords: gamer classifications, location based games, location based data, security awareness
Procedia PDF Downloads 292
28993 Comparison of Data Mining Models to Predict Future Bridge Conditions
Authors: Pablo Martinez, Emad Mohamed, Osama Mohsen, Yasser Mohamed
Abstract:
Highway and bridge agencies, such as the Ministry of Transportation in Ontario, use the Bridge Condition Index (BCI), defined as the weighted condition of all bridge elements, to determine rehabilitation priorities for their bridges. Accurate forecasting of BCI is therefore essential for bridge rehabilitation budget planning. The large amount of bridge condition data available over several years makes traditional mathematical models infeasible as analysis methods. This research study focuses on investigating different classification models developed to predict the bridge condition index in the province of Ontario, Canada, based on publicly available data for 2,800 bridges over a period of more than 10 years. Data preparation is a key factor in developing acceptable classification models, even with the simplest one, the k-NN model. All the models were tested, compared, and statistically validated via cross-validation and t-tests. A simple k-NN model showed reasonable results (within 0.5% relative error) when predicting the bridge condition in an incoming year.
Keywords: asset management, bridge condition index, data mining, forecasting, infrastructure, knowledge discovery in databases, maintenance, predictive models
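A toy sketch of the k-NN-plus-cross-validation setup the abstract describes; the features (age, traffic volume, previous BCI) and the banded BCI target are invented stand-ins for the Ontario data. Scaling the features first matters for k-NN, since traffic volume would otherwise dominate the distance metric.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 2800
# Invented features: bridge age (yr), traffic volume, previous-year BCI
X = np.column_stack([rng.uniform(0, 80, n), rng.uniform(1e2, 1e5, n),
                     rng.uniform(40, 100, n)])
bci_next = X[:, 2] - 0.05 * X[:, 0] + rng.normal(0, 1, n)  # toy deterioration
y = np.digitize(bci_next, [60, 70, 80, 90])  # band BCI into condition classes

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy
```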
Procedia PDF Downloads 191
28992 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes exist for superconductivity and for long-range quantum entanglement. However, these models are still mainly for specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the [-t, t] part shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which establish the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
Keywords: Blaschke, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
Procedia PDF Downloads 309
28991 Evidence of the Effect of the Structure of Social Representations on Group Identification
Authors: Eric Bonetto, Anthony Piermatteo, Fabien Girandola, Gregory Lo Monaco
Abstract:
The present contribution focuses on the effect of the structure of social representations on group identification. A social representation (SR) is defined as an organized and structured set of cognitions, produced and shared by members of the same group about the same social object. Within this framework, the central core theory establishes a structural distinction between central cognitions, the 'core', and peripheral ones: the former are theoretically considered more connected than the latter to group members' social identity and may play a greater role in SRs' ability to support group identification by means of a common vision of the object of representation. Indeed, the central core provides a reference point for the in-group, as it constitutes a consensual vision that gives meaning to a social object particularly important to individuals and to the group. However, while numerous contributions clearly refer to the underlying role of SRs in group identification, there is little empirical evidence for this aspect. Thus, we hypothesize an effect of the structure of SRs on group identification; more precisely, central cognitions (vs. peripheral ones) will lead to stronger group identification. In addition, we hypothesize that the refutation of a cognition will lead to stronger group identification than its activation. The SR mobilized here is that of 'studying' among a population of first-year undergraduate psychology students. A pretest (N = 82), using an attribute-challenge technique, was designed to identify the central and peripheral cognitions to use in the primes of our main study; the results of this pretest are in line with previous studies. The main study (online; N = 184), using a social priming methodology, was based on a 2 (structural status of the cognitions in the prime: central vs. peripheral) x 2 (type of prime: activation vs. refutation) experimental design to test our hypotheses. Results revealed, as expected, the main effect of the structure of the SR on group identification: central cognitions trigger a higher level of identification than peripheral ones. However, we observe neither an effect of the type of prime nor an interaction effect. These results experimentally demonstrate for the first time the effect of the structure of SRs on group identification and indicate that central cognitions are more connected than peripheral ones to group members' social identity. These results will be discussed considering the importance of understanding identity as a function of SRs and their potential to address the lack of attention to the definition of the group in social representations theory.
Keywords: group identification, social identity, social representations, structural approach
Procedia PDF Downloads 191
28990 Performance Comparison of Tablet Devices and Medical Diagnostic Display Devices Using Digital Object Patterns in PACS Environment
Authors: Yan-Lin Liu, Cheng-Ting Shih, Jay Wu
Abstract:
Tablet devices have been introduced into the medical environment in recent years. Display performance can vary with hardware specifications and display technologies. Therefore, the differences between tablet devices and medical diagnostic LCDs have to be verified to ensure that image quality is not jeopardized for clinical diagnosis in a picture archiving and communication system (PACS). In this study, a set of randomized object test patterns (ROTPs) was developed, consisting of randomly located spheres in abdominal CT images. Five radiologists were asked to independently review the CT images on different generations of iPads and on a diagnostic monochrome medical LCD monitor. Receiver operating characteristic (ROC) analysis was performed using a five-point rating scale, and the average area under the curve (AUC) and average reading time (ART) were calculated. The AUC values for the second-generation iPad, iPad mini, iPad Air, and the monochrome medical monitor were 0.712, 0.717, 0.725, and 0.740, respectively; the differences between the iPads were not significant. The ARTs were 177 min and 127 min for the iPad mini and the medical LCD monitor, respectively, a significant difference (p = 0.04). The results show that the iPads were slightly inferior to the monochrome medical LCD monitor. However, tablet devices possess advantages in portability and versatility, which can improve the convenience of rapid diagnosis and teleradiology. With advances in display technology, the applicability of tablet and mobile devices in PACS may become more diversified.
Keywords: tablet devices, PACS, receiver operating characteristic, LCD monitor
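A brief sketch of how an AUC is obtained from five-point confidence ratings in an observer study of this kind; the ratings below are fabricated for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Ground truth: 1 = sphere (lesion surrogate) present in the image, 0 = absent
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
# One reader's 5-point confidence ratings (5 = definitely present)
ratings = np.array([5, 4, 3, 5, 2, 1, 3, 2, 4, 1])

# The rating scale serves directly as the ROC decision variable:
# sweeping the threshold over 1..5 traces the ROC curve
print("AUC:", roc_auc_score(truth, ratings))
```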
Procedia PDF Downloads 480
28989 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms
Authors: Bliss Singhal
Abstract:
Machine learning (ML) is a branch of artificial intelligence (AI) in which computers analyze data and find patterns in it. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body and is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying tumors as benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate for improving the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, requiring fewer resources and less time. So far, deep learning methods have been used in research to detect cancer. This study takes a novel approach by assessing the potential of combining preprocessing algorithms with classification algorithms to detect metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and a genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy, 71.14%, was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression
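A skeletal version of such a pipeline on synthetic data: feature-subset search feeding PCA and k-NN. The genetic-algorithm stage is summarized by a simple random-search stand-in, since the paper's GA settings are not given.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=600, n_features=100, n_informative=15,
                           random_state=0)   # stand-in for scan-level features
rng = np.random.default_rng(8)

def fitness(mask):
    """Score a feature subset: PCA + k-NN cross-validated accuracy."""
    model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(5))
    return cross_val_score(model, X[:, mask], y, cv=3).mean()

# Random-search stand-in for the GA's selection/mutation loop
best_mask, best_score = None, -np.inf
for _ in range(20):
    mask = rng.random(X.shape[1]) < 0.5      # candidate feature subset
    score = fitness(mask)
    if score > best_score:
        best_mask, best_score = mask, score
print(f"best subset: {best_mask.sum()} features, accuracy {best_score:.3f}")
```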
Procedia PDF Downloads 81