Search results for: overview of porosity classification
3192 Ontology-Based Backpropagation Neural Network Classification and Reasoning Strategy for NoSQL and SQL Databases
Authors: Hao-Hsiang Ku, Ching-Ho Chi
Abstract:
Big data applications have become imperative in many fields. Many researchers have been devoted to increasing correct rates and reducing time complexities. Hence, this study designs and proposes an ontology-based backpropagation neural network classification and reasoning strategy for NoSQL big data applications, called ON4NoSQL. ON4NoSQL is responsible for enhancing the performance of classification in NoSQL and SQL databases to build up mass behavior models. Mass behavior models are made by MapReduce techniques and the Hadoop distributed file system on a Hadoop service platform. The inference engine of ON4NoSQL is the ontology-based backpropagation neural network classification and reasoning strategy. Simulation results indicate that ON4NoSQL can efficiently construct a high-performance environment for data storing, searching, and retrieving.
Keywords: Hadoop, NoSQL, ontology, backpropagation neural network, Hadoop distributed file system
Procedia PDF Downloads 262
3191 Durability of a Cementitious Matrix Based on Treated Sediments
Authors: Mahfoud Benzerzour, Mouhamadou Amar, Amine Safhi, Nor-Edine Abriak
Abstract:
Significant volumes of sediment are dredged annually in France and all over the world. These materials may, in fact, be used beneficially as supplementary cementitious material. This paper studies the durability of a new cement matrix based on marine sediment dredged from Dunkirk Harbor (north of France). Several techniques are used to characterize the raw sediment, such as physical properties, chemical analyses, and mineralogy. The XRD analysis revealed quartz, calcite, and kaolinite as the main mineral phases. In order to eliminate organic matter and activate some of those minerals, the sediment was calcined at a temperature of 850°C for 1 h. Moreover, four blended mortars were formulated by mixing a Portland cement (CEM I 52,5 N) and the calcined sediment as a partial cement substitute (0%, 10%, 20% and 30%). Reference mortars, based on the blended cement, were then prepared. This re-use cannot be substantiated as efficient without a durability study. For this purpose, the following tests were conducted on those mortars: mercury porosity, water-accessible porosity, chloride permeability, freezing and thawing, external sulfate attack, alkali-aggregate reaction, and compressive and bending strength. The results of most of those tests show that the mortar containing 10% of the treated sediment is as efficient and durable as the reference mortar itself. This implies that the presence of the calcined sediment improves the general behavior of the mortar.
Keywords: sediment, characterization, calcination, substitution, durability
Procedia PDF Downloads 257
3190 Advances in Machine Learning and Deep Learning Techniques for Image Classification and Clustering
Authors: R. Nandhini, Gaurab Mudbhari
Abstract:
Ranging from health care to self-driving cars, machine learning and deep learning algorithms have revolutionized fields through the proper utilization of images and visual-oriented data. Segmentation, regression, classification, clustering, dimensionality reduction, etc., are some of the machine learning tasks that have helped machine learning and deep learning models become state-of-the-art for fields where images are key datasets. Among these tasks, classification and clustering are essential but difficult because of the intricate and high-dimensional characteristics of image data. This study examines and assesses advanced techniques in supervised classification and unsupervised clustering for image datasets, emphasizing the relative efficiency of convolutional neural networks (CNNs), Vision Transformers (ViTs), Deep Embedded Clustering (DEC), and self-supervised learning approaches. Due to the distinctive structural attributes present in images, conventional methods often fail to effectively capture spatial patterns, which has led to models that utilize more advanced architectures and attention mechanisms. For image classification, we investigated both CNNs and ViTs. The CNN, well known for its ability to detect spatial hierarchies, serves as one core model in our study. The ViT, the other core model, reflects a modern classification method built on a self-attention mechanism; this self-attention makes it more robust because it allows the model to learn global dependencies in images without relying on convolutional layers. This paper evaluates the performance of these two architectures based on accuracy, precision, recall, and F1-score across different image datasets, analyzing their appropriateness for various categories of images. In the domain of clustering, we assess DEC, variational autoencoders (VAEs), and conventional clustering techniques such as k-means applied to embeddings derived from CNN models. DEC, a prominent clustering model, has gained the attention of many ML engineers because it combines feature learning and clustering into a single framework, with the main goal of improving clustering quality through better feature representation. VAEs, on the other hand, are well known for using latent embeddings to group similar images without requiring prior labels, by utilizing a probabilistic clustering method.
Keywords: machine learning, deep learning, image classification, image clustering
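To make the clustering-on-embeddings pipeline concrete, the sketch below pairs k-means with stand-in CNN feature vectors. It is a minimal Python illustration; the embedding dimensions, cluster count, and data are assumptions, not the authors' setup.

```python
# A minimal sketch: cluster CNN-derived image embeddings with k-means.
# The random matrix stands in for feature vectors from a (hypothetical) backbone.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 128))   # stand-in for CNN feature vectors

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
labels = kmeans.fit_predict(embeddings)
print("silhouette:", silhouette_score(embeddings, labels))
```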
Procedia PDF Downloads 14
3189 The Impact of Electrospinning Parameters on Surface Morphology and Chemistry of PHBV Fibers
Authors: Lukasz Kaniuk, Mateusz M. Marzec, Andrzej Bernasik, Urszula Stachewicz
Abstract:
Electrospinning is one of the commonly used methods to produce micro- or nano-fibers. The properties of electrospun fibers allow them to be used to produce tissue scaffolds, biodegradable bandages, or purification membranes. The morphology of the obtained fibers depends on the composition of the polymer solution as well as on the processing parameters. Interesting properties such as high fiber porosity can be achieved by changing the humidity during electrospinning. Moreover, by changing the voltage polarity in electrospinning, we are able to alter the functional groups at the surface of the fibers. In this study, electrospun fibers were made of a natural, thermoplastic polyester, PHBV (poly(3-hydroxybutyric acid-co-3-hydroxyvaleric acid)). The fibrous mats were obtained using both positive and negative voltage polarities, and their surface was characterized using X-ray photoelectron spectroscopy (XPS, Ulvac-Phi, Chigasaki, Japan). Furthermore, the effect of humidity on surface morphology was investigated using scanning electron microscopy (SEM, Merlin Gemini II, Zeiss, Germany). Electrospun PHBV fibers produced with positive and negative voltage polarity had similar morphology and average fiber diameters of 2.47 ± 0.21 µm and 2.44 ± 0.15 µm, respectively. The change of voltage polarity had a significant impact on the reorientation of the carbonyl groups, which consequently changed the surface potential of the electrospun PHBV fibers. Increasing the humidity during electrospinning induces porosity in the surface structure of the fibers. In conclusion, we showed in our studies that process parameters such as humidity and voltage polarity have a great influence on fiber morphology and chemistry, changing their functionality. Surface properties of polymer fibers have a significant impact on cell integration and attachment, which is very important in tissue engineering. The possibility of changing surface porosity allows the use of fibers in various tissue engineering and drug delivery systems. Acknowledgment: This study was conducted within the 'Nanofiber-based sponges for atopic skin treatment' project, carried out within the First TEAM programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund, project no. POIR.04.04.00-00-4571/18-00.
Keywords: cell integration, electrospun fiber, PHBV, surface characterization
Procedia PDF Downloads 119
3188 Land Use Change Detection Using Satellite Images for Najran City, Kingdom of Saudi Arabia (KSA)
Authors: Ismail Elkhrachy
Abstract:
Determination of land use change is an important component of regional planning, for applications ranging from urban fringe change detection to monitoring land use change. These data are very useful for natural resources management. On the other hand, the technologies and methods of change detection have also evolved dramatically during the past 20 years, and it is now well recognized that change detection has become one of the best methods for studying the dynamic change of land use from multi-temporal remotely sensed data. The objective of this paper is to assess, evaluate, and monitor land use change surrounding the area of Najran city, Kingdom of Saudi Arabia (KSA), using Landsat images (June 23, 2009) and an ETM+ image (June 21, 2014). The post-classification change detection technique was applied. Finally, two-time subset images of Najran city were compared on a pixel-by-pixel basis using the post-classification comparison method, the from-to change matrix was produced, and the land use change information was obtained. Three classes, namely urban, bare land, and agricultural land, were obtained using an unsupervised classification method with Erdas Imagine and ArcGIS software. Accuracy assessment of the classification was performed before calculating change detection for the study area. The obtained accuracy is between 61% and 87% for all the classes. Change detection analysis shows that the urban area grew rapidly, increasing by 73.2%, while the agricultural area decreased by 10.5% and the barren area by 7% between 2009 and 2014. The quantitative study indicated that the urban class left 58.2 km² unchanged, gained 70.3 km², and lost 16 km². For the bare land class, 586.4 km² was unchanged, 53.2 km² was gained, and 101.5 km² was lost. For the agricultural class, 20.2 km² was unchanged, 31.2 km² was gained, and 37.2 km² was lost.
Keywords: land use, remote sensing, change detection, satellite images, image classification
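The from-to change matrix described above can be sketched as follows; the class codes and toy rasters are illustrative assumptions, not the Najran classification results.

```python
# A minimal sketch of the post-classification, pixel-by-pixel comparison:
# two classified maps (0=urban, 1=bare land, 2=agriculture) are cross-tabulated
# into a from-to change matrix.
import numpy as np

classes = ["urban", "bare land", "agriculture"]
map_2009 = np.random.default_rng(1).integers(0, 3, size=(100, 100))
map_2014 = np.random.default_rng(2).integers(0, 3, size=(100, 100))

n = len(classes)
change_matrix = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        change_matrix[i, j] = np.sum((map_2009 == i) & (map_2014 == j))

print(change_matrix)  # rows: class in 2009, columns: class in 2014
# diagonal = unchanged pixels; off-diagonal = "from-to" transitions
```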
Procedia PDF Downloads 525
3187 The Necessity to Standardize Procedures of Providing Engineering Geological Data for Designing Road and Railway Tunneling Projects
Authors: Atefeh Saljooghi Khoshkar, Jafar Hassanpour
Abstract:
One of the main problems at the design stage of many tunneling projects is the lack of an appropriate standard for the provision of engineering geological data in a predefined format. This is particularly evident in highway and railroad tunnel projects, in which a number of tunnels and different professional teams are involved. In this regard, comprehensive software needs to be designed using the accepted methods in order to help engineering geologists prepare standard reports that contain sufficient input data for the design stage. Regarding this necessity, applied software has been designed using macro capabilities and Visual Basic for Applications (VBA) in Microsoft Excel. In this software, all of the engineering geological input data required for designing different parts of tunnels, such as discontinuity properties, rock mass strength parameters, rock mass classification systems, boreability classification, the penetration rate, and so forth, can be calculated and reported in a standard format.
Keywords: engineering geology, rock mass classification, rock mechanics, tunnel
Procedia PDF Downloads 81
3186 Polyaniline/CMK-3/Hydroquinone Composite Electrode for Supercapacitor Application
Authors: Hu-Cheng Weng, Jhen-Ting Huang, Chia-Chia Chang, An-Ya Lo
Abstract:
In this study, the mesoporous carbon CMK-3 was adopted as a supporting material for the electroactive polymer polyaniline (PANI) for supercapacitor application, where hydroquinone (HQ) was integrated to enhance the redox reaction of PANI. The results show that the addition of PANI improves the capacitance of the electrode from 89 F/g (CMK-3) to 337 F/g (PANI/CMK-3), and the addition of HQ further improves the capacitance to 463 F/g (PANI/CMK-3/HQ). The PANI provides higher energy density and also acts as the binder of the electrode; the CMK-3 provides higher electric double-layer capacitance (EDLC) and stabilizes the polyaniline through its high porosity. With the addition of HQ, the capacitance of PANI/CMK-3 was further enhanced. In-situ analyses, including cyclic voltammetry (CV), chronopotentiometry (CP), and electrochemical impedance spectroscopy (EIS), were applied for electrode performance examination. For materials characterization, the crystal structure, morphology, microstructure, and porosity were examined by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), and 77 K N2 adsorption/desorption analyses, respectively. The effects of electrolyte pH value, PANI polymerization time, HQ concentration, and PANI/CMK-3 ratio on capacitance were discussed. The durability was also studied by a long-term operation test. The results show that PANI/CMK-3/HQ has great potential for supercapacitor application. Finally, the potential of an all-solid PANI/CMK-3/HQ-based supercapacitor was successfully demonstrated.
Keywords: CMK-3, PANI, redox electrolyte, solid supercapacitor
Procedia PDF Downloads 138
3185 The Effects of Lithofacies on Oil Enrichment in Lucaogou Formation Fine-Grained Sedimentary Rocks in Santanghu Basin, China
Authors: Guoheng Liu, Zhilong Huang
Abstract:
For more than the past ten years, oil and gas have been produced from marine shales such as the Barnett Shale. In addition, in recent years, major breakthroughs have also been made in lacustrine shale gas exploration, such as in the Yanchang Formation of the Ordos Basin in China. The Lucaogou Formation shale, which is also a lacustrine shale, has likewise yielded high production in recent years; wells such as M1, M6, and ML2 yielded daily oil productions of 5.6 tons, 37.4 tons, and 13.56 tons, respectively. Lithologic identification and classification of reservoirs are the basis and key to oil and gas exploration. Lithology and lithofacies obviously control the distribution of oil and gas in lithological reservoirs, so it is of great significance to describe the characteristics of reservoir lithology and lithofacies finely. Lithofacies is an intrinsic property of rock formed under certain conditions of sedimentation. Fine-grained sedimentary rocks such as shale formed under different sedimentary conditions display great particularity and distinctiveness. Hence, to our best knowledge, no constant and unified criteria and methods exist for defining and classifying the lithofacies of fine-grained sedimentary rocks. Consequently, multiple parameters and disciplines are necessary. A series of qualitative descriptions and quantitative analyses were used to figure out the lithofacies characteristics and their effect on oil accumulation in the Lucaogou Formation fine-grained sedimentary rocks in the Santanghu basin. The qualitative descriptions include core description, petrographic thin section observation, fluorescent thin-section observation, cathodoluminescence observation, and scanning electron microscope observation. The quantitative analyses include X-ray diffraction, total organic carbon analysis, Rock-Eval II pyrolysis, Soxhlet extraction, porosity and permeability analysis, and oil saturation analysis. Three types of lithofacies are well developed in the study area: organic-rich massive shale lithofacies, organic-rich laminated and cloddy hybrid sedimentary lithofacies, and organic-lean massive carbonate lithofacies. The organic-rich massive shale lithofacies mainly includes massive shale and tuffaceous shale, of which quartz and clay minerals are the major components. The organic-rich laminated and cloddy hybrid sedimentary lithofacies contains lamina and cloddy structures; rocks from this lithofacies chiefly consist of dolomite and quartz. The organic-lean massive carbonate lithofacies mainly contains massive-bedded fine-grained carbonate rocks, of which fine-grained dolomite accounts for the main part. The organic-rich massive shale lithofacies contains the highest content of free hydrocarbons and solid organic matter, and more pores are developed in it. The organic-lean massive carbonate lithofacies contains the lowest content of solid organic matter and develops the fewest pores. The organic-rich laminated and cloddy hybrid sedimentary lithofacies develops the largest number of cracks and fractures. To sum up, the organic-rich massive shale lithofacies is the most favorable type of lithofacies, while the organic-lean massive carbonate lithofacies cannot host large-scale oil accumulation.
Keywords: lithofacies classification, tuffaceous shale, oil enrichment, Lucaogou Formation
Procedia PDF Downloads 220
3184 Defect Classification of Hydrogen Fuel Pressure Vessels Using Deep Learning
Authors: Dongju Kim, Youngjoo Suh, Hyojin Kim, Gyeongyeong Kim
Abstract:
Acoustic emission testing (AET) is widely used to test the structural integrity of operational hydrogen storage containers, and clustering algorithms are frequently used in pattern recognition methods to interpret AET results. However, the interpretation of AET results can vary from user to user, as the tuning of the relevant parameters relies on the user's experience and knowledge of AET. Therefore, it is necessary to use a deep learning model to identify patterns in acoustic emission (AE) signal data that can be used to classify defects instead. In this paper, a deep learning-based model for classifying the types of defects in hydrogen storage tanks, using AE sensor waveforms, is proposed. As hydrogen storage tanks are commonly constructed using carbon fiber reinforced polymer (CFRP) composite, a defect classification dataset was collected through a tensile test on a CFRP specimen with an AE sensor attached. The classification model, using a one-dimensional convolutional neural network (1-D CNN) and synthetic minority oversampling technique (SMOTE) data augmentation, achieved 91.09% accuracy for each defect. It is expected that the deep learning classification model in this paper, used with AET, will help in evaluating the operational safety of hydrogen storage containers.
Keywords: acoustic emission testing, carbon fiber reinforced polymer composite, one-dimensional convolutional neural network, SMOTE data augmentation
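A minimal sketch of the SMOTE-plus-1-D-CNN pipeline is given below; the network depth, kernel sizes, and synthetic waveforms are assumptions for illustration, not the authors' architecture.

```python
# Balance an imbalanced AE-waveform dataset with SMOTE, then feed a 1-D CNN.
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SMOTE

# toy AE waveforms: 200 samples x 1024 points, 3 imbalanced defect classes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024)).astype(np.float32)
y = rng.choice(3, size=200, p=[0.7, 0.2, 0.1])

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)  # balance the classes

class WaveformCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, n_points)
        return self.classifier(self.features(x).squeeze(-1))

model = WaveformCNN()
logits = model(torch.from_numpy(X_res.astype(np.float32)).unsqueeze(1))
print(logits.shape)  # (n_resampled, 3); train with cross-entropy as usual
```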
Procedia PDF Downloads 95
3183 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network
Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson
Abstract:
The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods for analyzing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost. Little research has been undertaken on how to optimally specify what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources and determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision-making on which dataset should be processed at the 'edge' and what should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0
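A fast-and-frugal heuristic of the kind mentioned above can be sketched as an ordered sequence of one-cue exits; the cues, thresholds, and attribute names below are illustrative assumptions, not the paper's tested rules.

```python
# A minimal sketch of a fast-and-frugal decision tree for edge-vs-cloud routing:
# each cue is checked in order and can trigger an immediate exit.
from dataclasses import dataclass

@dataclass
class Dataset:
    latency_critical: bool   # needed for real-time machine control?
    size_mb: float           # payload size per transmission window
    needs_history: bool      # requires long-term archives / fleet-wide data?

def route(d: Dataset) -> str:
    if d.latency_critical:        # cue 1: deadlines -> process at the edge
        return "edge"
    if d.size_mb > 50:            # cue 2: too costly to transmit -> edge
        return "edge"
    if d.needs_history:           # cue 3: needs archived context -> cloud
        return "cloud"
    return "cloud"                # default: offload to the remote server

print(route(Dataset(latency_critical=False, size_mb=120, needs_history=True)))
```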
Procedia PDF Downloads 182
3182 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders can increase or decrease hatchability. This study aims to identify the extrinsic parameters that affect the commercial hatchability of local chickens' eggs and to determine the most efficient classification model for a hatchability rate greater than 90%. In this study, seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear discriminant analysis (LDA), classification and regression trees (CART), k-nearest neighbours (kNN), support vector machines (SVM) with a linear kernel, and random forest (RF) algorithms were applied to classify hatchability. This grouping process was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width, and shell length, and positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. Multiple linear regression models were more accurate than single linear models, with the highest coefficient of determination (R²) of 94% and minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying breeder outcomes as economically profitable or not in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
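The two-stage analysis (regression, then binary classification of the greater-than-90% hatchability outcome) might look like the following sketch; the synthetic data and model settings are assumptions, not the study's dataset.

```python
# Multiple linear regression on the seven extrinsic parameters, then a random
# forest classifying hatchability as > 90% or not. Data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 7))              # egg weight, moisture loss, age, ...
hatchability = 85 + X @ rng.normal(size=7) + rng.normal(scale=2, size=300)

reg = LinearRegression().fit(X, hatchability)
print("R^2:", reg.score(X, hatchability))

y = (hatchability > 90).astype(int)        # binary target: profitable or not
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```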
Procedia PDF Downloads 88
3181 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed method, the local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts the edge information using the local directional pattern (LDP) from the edge response available in a particular region, thereby achieving extra discriminative feature value. Typically, the LDP extracts the edge details in all eight directions. Integrating edge responses with the local binary pattern yields a more robust texture descriptor than the other descriptors used in texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (GWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFN). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms. The proposed method achieves the highest overall classification accuracy of 94% compared to the other state-of-the-art methods.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
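As a reference point for the descriptor family used here, the sketch below computes the basic 8-neighbour LBP code map that LDEDBP builds on; it is a simplified illustration, not the proposed LDEDBP itself.

```python
# A minimal pure-numpy LBP: each pixel is encoded by thresholding its eight
# neighbours against the centre value and packing the results into one byte.
import numpy as np

def lbp(image: np.ndarray) -> np.ndarray:
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbours, clockwise
    center = image[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        rows = slice(1 + dy, image.shape[0] - 1 + dy)
        cols = slice(1 + dx, image.shape[1] - 1 + dx)
        code |= (image[rows, cols] >= center).astype(np.int32) << bit
    return code

patch = np.random.default_rng(0).integers(0, 256, size=(64, 64))
hist, _ = np.histogram(lbp(patch), bins=256, range=(0, 256))  # texture feature
print(hist[:8])
```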
Procedia PDF Downloads 164
3180 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we convey the fusion and state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized to perform the text identification correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrated on two techniques to obtain features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window pane that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are measured; their directions are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue, in part because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages
Procedia PDF Downloads 497
3179 Music Genre Classification Based on Non-Negative Matrix Factorization Features
Authors: Soyon Kim, Edward Kim
Abstract:
In order to retrieve information from the massive stream of songs in the music industry, music search by title, lyrics, artist, mood, and genre has become more important. Despite the subjectivity and controversy over the definition of music genres across different nations and cultures, automatic genre classification systems that facilitate the process of music categorization have been developed. Manual genre selection by music producers is provided as statistical data for designing automatic genre classification systems. In this paper, an automatic music genre classification system utilizing non-negative matrix factorization (NMF) is proposed. Short-term characteristics of the music signal can be captured based on timbre features such as the mel-frequency cepstral coefficient (MFCC), decorrelated filter bank (DFB), octave-based spectral contrast (OSC), and octave band sum (OBS). Long-term time-varying characteristics of the music signal can be summarized with (1) statistical features such as the mean, variance, minimum, and maximum of the timbre features and (2) modulation spectrum features such as the spectral flatness measure, spectral crest measure, spectral peak, spectral valley, and spectral contrast of the timbre features. Not only these conventional basic long-term feature vectors but also NMF-based feature vectors are proposed to be used together for genre classification. In the training stage, NMF basis vectors were extracted for each genre class. The NMF features were calculated in the log spectral magnitude domain (NMF-LSM) as well as in the basic feature vector domain (NMF-BFV). For NMF-LSM, the entire full-band spectrum was used. However, for NMF-BFV, only the low-band spectrum was used, since the high-frequency modulation spectrum of the basic feature vectors did not contain important information for genre classification. In the test stage, using the set of pre-trained NMF basis vectors, the genre classification system extracted the NMF weighting values of each genre as the NMF feature vectors. A support vector machine (SVM) was used as the classifier. The GTZAN multi-genre music database, composed of 10 genres with 100 songs each, was used for training and testing. To increase the reliability of the experiments, 10-fold cross-validation was used. For a given input song, an extracted NMF-LSM feature vector was composed of 10 weighting values that corresponded to the classification probabilities for the 10 genres. An NMF-BFV feature vector also had a dimensionality of 10. Combined with the basic long-term features such as the statistical and modulation spectrum features, the NMF features provided increased accuracy with a slight increase in feature dimensionality. The conventional basic features by themselves yielded 84.0% accuracy, but the basic features with NMF-LSM and NMF-BFV provided 85.1% and 84.2% accuracy, respectively. The basic features required a dimensionality of 460, whereas NMF-LSM and NMF-BFV each required a dimensionality of only 10. Combining the basic features, NMF-LSM, and NMF-BFV with an SVM with a radial basis function (RBF) kernel produced a significantly higher classification accuracy of 88.3% with a feature dimensionality of 480.
Keywords: mel-frequency cepstral coefficient (MFCC), music genre classification, non-negative matrix factorization (NMF), support vector machine (SVM)
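The NMF feature idea can be sketched as follows; for brevity a single NMF trained on toy spectra stands in for the per-genre basis training described above, so the data and settings are illustrative assumptions.

```python
# Learn NMF bases on non-negative spectral features and represent each song by
# its NMF weight vector, which then feeds an SVM classifier.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)
spectra = rng.random((100, 256))          # toy log-magnitude spectra, 100 songs
genres = rng.integers(0, 10, size=100)    # 10 genre labels, as in GTZAN

nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
weights = nmf.fit_transform(spectra)      # 10-dimensional NMF feature vector

clf = SVC(kernel="rbf").fit(weights, genres)
print("train accuracy:", clf.score(weights, genres))
```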
Procedia PDF Downloads 303
3178 Decision Making System for Clinical Datasets
Authors: P. Bharathiraja
Abstract:
Computer-aided decision making systems are used to enhance the diagnosis and prognosis of diseases and to assist clinicians and junior doctors in clinical decision making. Medical data used for decision making should be definite and consistent. Data mining and soft computing techniques are used for cleaning the data and for incorporating human reasoning in decision making systems. A fuzzy rule-based inference technique can be used for classification in order to incorporate human reasoning in the decision making process. In this work, missing values are imputed using the mean or mode of the attribute. The data are normalized using min-max normalization to improve the design and efficiency of the fuzzy inference system. The fuzzy inference system is used to handle the uncertainties that exist in the medical data. Equal-width partitioning is used to partition the attribute values into appropriate fuzzy intervals. Fuzzy rules are generated using a class-based associative rule mining algorithm. The system is trained and tested using the heart disease dataset from the University of California at Irvine (UCI) Machine Learning Repository. The data were split into training and testing sets using a hold-out approach. From the experimental results, it can be inferred that classification using the fuzzy inference system performs better than trivial IF-THEN rule-based classification approaches. Furthermore, it is observed that the use of fuzzy logic and the fuzzy inference mechanism handles uncertainty and resembles human decision making. The system can be used in the absence of a clinical expert to assist junior doctors and clinicians in clinical decision making.
Keywords: decision making, data mining, normalization, fuzzy rule, classification
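Two of the preprocessing steps named above, min-max normalization and equal-width partitioning into fuzzy sets, can be sketched as follows; the attribute values and the three-set triangular partition are illustrative assumptions.

```python
# Min-max normalisation followed by an equal-width partition of [0, 1] into
# three triangular fuzzy sets ("low", "medium", "high").
import numpy as np

def min_max(x: np.ndarray) -> np.ndarray:
    return (x - x.min()) / (x.max() - x.min())

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Membership of x in the triangular fuzzy set (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

cholesterol = np.array([126., 199., 240., 282., 564.])   # toy attribute values
norm = min_max(cholesterol)

centers = np.linspace(0, 1, 3)            # set centres at 0.0, 0.5, 1.0
width = 0.5                               # equal width of each fuzzy interval
for value in norm:
    memberships = [triangular(value, c - width, c, c + width) for c in centers]
    print(round(value, 2), [round(m, 2) for m in memberships])
```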
Procedia PDF Downloads 518
3177 Dual-Channel Reliable Breast Ultrasound Image Classification Based on Explainable Attribution and Uncertainty Quantification
Authors: Haonan Hu, Shuge Lei, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Jijun Tang
Abstract:
This paper focuses on the classification of breast ultrasound images and investigates the reliability measurement of classification results. A dual-channel evaluation framework was developed based on the proposed inference reliability and predictive reliability scores. For the inference reliability evaluation, human-aligned and doctor-agreed inference rationales based on the improved feature attribution algorithm SP-RISA are applied. Uncertainty quantification is used to evaluate the predictive reliability via test-time enhancement. The effectiveness of this reliability evaluation framework has been verified on the breast ultrasound clinical dataset YBUS, and its robustness has been verified on the public dataset BUSI. The expected calibration errors on both datasets are significantly lower than those of traditional evaluation methods, which demonstrates the effectiveness of the proposed reliability measurement.
Keywords: medical imaging, ultrasound imaging, XAI, uncertainty measurement, trustworthy AI
Procedia PDF Downloads 103
3176 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis
Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin
Abstract:
Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, raising concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients, which doubles the training time and costs and renders the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps are a method of providing insight into the feature maps that lead to a convolutional neural network's classification; in the case of lung diseases, the region of interest is enhanced by U-Net-assisted class activation map (CAM) visualization. Therefore, our proposed model combines image segmentation models and classifiers to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
Keywords: multi-output network model, U-Net, class activation map, image classification, medical imaging analysis
Procedia PDF Downloads 204
3175 Effective Stiffness, Permeability, and Reduced Wall Shear Stress of Highly Porous Tissue Engineering Scaffolds
Authors: Hassan Mohammadi Khujin
Abstract:
Tissue engineering is the science of creating tissues and complex organs using scaffolds, cells, and biologically active components. Most cells require scaffolds to grow and proliferate. These temporary support structures for tissue regeneration are later replaced with extracellular matrix produced inside the body. Recent advances in additive manufacturing methods allow the production of highly porous, complex three-dimensional scaffolds suitable for cell growth and proliferation. The current paper investigates the mechanical properties, including elastic modulus and compressive strength, as well as the fluid flow dynamics, including permeability and flow-induced shear stress, of scaffolds with four triply periodic minimal surface (TPMS) configurations, namely the Schwarz primitive, the Schwarz diamond, the gyroid, and the Neovius structures. Higher porosity resulted in lower mechanical properties in all scaffold types. The permeability of the scaffolds was determined using Darcy's law with reference to geometrical parameters and the pressure drop derived from the computational fluid dynamics (CFD) analysis. Higher porosity enhanced permeability and reduced wall shear stress in all scaffold designs.
Keywords: highly porous scaffolds, tissue engineering, finite element analysis, CFD analysis
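The Darcy's-law permeability estimate works as in the sketch below; the numerical inputs are illustrative placeholders, not the paper's CFD results.

```python
# Darcy's law: k = Q * mu * L / (A * dP), with the pressure drop dP taken
# from a CFD simulation of flow through the scaffold.
def darcy_permeability(flow_rate, viscosity, length, area, pressure_drop):
    """Return permeability k [m^2] from Darcy's law."""
    return flow_rate * viscosity * length / (area * pressure_drop)

k = darcy_permeability(
    flow_rate=1.0e-9,      # Q  [m^3/s] volumetric flow through the scaffold
    viscosity=1.0e-3,      # mu [Pa*s]  water at room temperature
    length=2.0e-3,         # L  [m]     scaffold height in the flow direction
    area=1.0e-6,           # A  [m^2]   cross-section normal to the flow
    pressure_drop=50.0,    # dP [Pa]    from the CFD simulation
)
print(f"k = {k:.3e} m^2")
```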
Procedia PDF Downloads 76
3174 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
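The k-mer representation at the core of the approach can be sketched as follows; the synthetic sequences, labels, and small k are illustrative assumptions, not the MTB data.

```python
# Map each DNA sequence to a vector of k-mer counts, then feed a classifier.
from collections import Counter
from itertools import product

import numpy as np
from sklearn.linear_model import LogisticRegression

def kmer_vector(seq: str, k: int, vocab: dict) -> np.ndarray:
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    vec = np.zeros(len(vocab))
    for kmer, n in counts.items():
        vec[vocab[kmer]] = n
    return vec

k = 3                                           # small k for the toy example
vocab = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), 500)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)            # e.g. susceptible vs resistant

X = np.array([kmer_vector(s, k, vocab) for s in seqs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```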
Procedia PDF Downloads 168
3173 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment, and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from the whole genome sequence data of a given bacterial isolate, and (iv) demonstrate the computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes, and the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources, and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 160
3172 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block Nv32, Shenvsi Oilfield, China
Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan
Abstract:
The significant effect of CO₂ on global climate and the environment has drawn growing concern worldwide. Enhanced oil recovery (EOR) associated with the sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations, since it improves oil recovery from the existing oil reservoir and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measurements are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model for estimating the storage capacity of CO₂ under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs from twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct the 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity and permeability, as well as lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone in proportions of 38.09%, 32.42%, and 29.49%, respectively. Well log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios. These estimated values of porosity, permeability, lithofacies, and net-to-gross were upscaled and distributed laterally using the Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate the 3D reservoir geological models. The models show lateral heterogeneities in the reservoir properties and lithofacies, with the best reservoir rocks in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32
Procedia PDF Downloads 180
3171 Linguoculturological Analysis of Advertising: An Overview of Previous Researches
Authors: Brankica Bojovic
Abstract:
Every study of advertising is intrinsically multidisciplinary, as the researcher must take into account the linguistic, social, psychological, economic, political, and cultural factors that have all played a significant role in the history of advertising. A linguoculturological analysis of advertising aims to provide insight into the ideologies and archetypal structures that abide in the discourse of advertising messages, and to give an overview of the academic research in the areas of linguistics and cultural and social studies that has contributed to the demystification of the discourse of advertising. As the process of globalisation gains momentum, so do the expansion of businesses and economies and the migration of populations. Yet the uniqueness of individual cultures prevails and demonstrates that the processes of communication and translation are matters not only of linguistic, but also of cultural transferral. Therefore, even the world of business and advertising, the world of fast food, fast production, fast living, is programmed in accordance with the uniqueness of those cultures. The fact that culture, beliefs, ideologies, values, and societal expectations permeate every sphere of advertising will be addressed through illustrative examples.
Keywords: culturology, ideology, linguistic analysis in advertising, linguistic and visual metaphors, propaganda, translation of advertisements
Procedia PDF Downloads 289
3170 Surface Hole Defect Detection of Rolled Sheets Based on Pixel Classification Approach
Authors: Samira Taleb, Sakina Aoun, Slimane Ziani, Zoheir Mentouri, Adel Boudiaf
Abstract:
Rolling is a pressure treatment technique that modifies the shape of steel ingots or billets between rotating rollers. During this process, defects may form on the surface of the rolled sheets and are likely to affect the performance and quality of the finished product. In our study, we developed a method for detecting surface hole defects using a pixel classification approach. This work includes several steps. First, we performed image preprocessing to delimit areas with and without hole defects in the sheet image. Then, we developed the histograms of each area to generate the gray-level membership intervals of the pixels that characterize each area. As we noticed an intersection between the gray-level intervals of the two areas, we finally performed a learning step based on a series of detection tests to refine the membership intervals of each area and to choose the defect detection criterion, in order to optimize the recognition of surface holes.
Keywords: classification, defect, surface, detection, hole
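The histogram-interval idea can be sketched as follows; the gray-level distributions and percentile thresholds are illustrative assumptions, not the study's measured intervals.

```python
# Gray-level histograms of defect and defect-free regions define membership
# intervals; each pixel is then labelled by the interval it falls in.
import numpy as np

rng = np.random.default_rng(0)
defect_free = rng.normal(170, 15, size=5000).clip(0, 255)  # bright steel
hole_defect = rng.normal(60, 20, size=5000).clip(0, 255)   # dark hole pixels

# derive the membership interval of each area from its histogram tails
lo_d, hi_d = np.percentile(hole_defect, [1, 99])
lo_f, hi_f = np.percentile(defect_free, [1, 99])
print(f"defect interval: [{lo_d:.0f}, {hi_d:.0f}], "
      f"defect-free interval: [{lo_f:.0f}, {hi_f:.0f}]")

def classify_pixel(g: float) -> str:
    in_defect = lo_d <= g <= hi_d
    in_free = lo_f <= g <= hi_f
    if in_defect and not in_free:
        return "hole"
    if in_free and not in_defect:
        return "sound"
    return "ambiguous"   # neither or both intervals: resolved by the learning step

print(classify_pixel(55), classify_pixel(180), classify_pixel(120))
```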
Procedia PDF Downloads 20
3169 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 speller paradigm. Neural networks trained with the results of dynamic connectivity analysis between different brain regions are used for classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods, such as the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window. It overcomes these shortcomings by dynamically adjusting the window size using the RICI rule, and it extracts information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, through this research, we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
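The connectivity measure underlying RICI-imCPCC can be sketched as below; the formula for the complex Pearson correlation of analytic signals is a plausible reading of the method, and the adaptive windowing and RICI rule are omitted, so treat this as an assumption-laden illustration.

```python
# Imaginary part of the complex Pearson correlation between the analytic
# (Hilbert) signals of two channels; insensitive to zero-lag volume conduction.
import numpy as np
from scipy.signal import hilbert

def imag_complex_pearson(x: np.ndarray, y: np.ndarray) -> float:
    zx, zy = hilbert(x), hilbert(y)           # complex analytic signals
    zx = zx - zx.mean()
    zy = zy - zy.mean()
    corr = np.sum(zx * np.conj(zy)) / (
        np.sqrt(np.sum(np.abs(zx) ** 2)) * np.sqrt(np.sum(np.abs(zy) ** 2)))
    return float(corr.imag)

t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t - np.pi / 4)    # 45-degree phase lag
print(imag_complex_pearson(x, y))
```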
Procedia PDF Downloads 217
3168 Accuracy Analysis of the American Society of Anesthesiologists Classification Using ChatGPT
Authors: Jae Ni Jang, Young Uk Kim
Abstract:
Background: Chat Generative Pre-trained Transformer-3 (ChatGPT; OpenAI, San Francisco, California) is an artificial intelligence chatbot based on a large language model designed to generate human-like text. As the usage of ChatGPT is increasing among less knowledgeable patients, medical students, and anesthesia and pain medicine residents or trainees, we aimed to evaluate the accuracy of ChatGPT-3 responses to questions about the American Society of Anesthesiologists (ASA) classification based on patients' underlying diseases and to assess the quality of the generated responses. Methods: A total of 47 questions were submitted to ChatGPT using textual prompts. The questions were designed for ChatGPT-3 to provide answers regarding ASA classification in response to common underlying diseases frequently observed in adult patients. In addition, we created 18 questions regarding the ASA classification for pediatric patients and pregnant women. The accuracy of ChatGPT's responses was evaluated by cross-referencing with Miller's Anesthesia, Morgan & Mikhail's Clinical Anesthesiology, and the American Society of Anesthesiologists' ASA Physical Status Classification System (2020). Results: Of the 47 questions pertaining to adults, ChatGPT-3 provided correct answers for only 23, resulting in an accuracy rate of 48.9%. Furthermore, the responses provided by ChatGPT-3 regarding children and pregnant women were mostly inaccurate, as indicated by a 28% accuracy rate (5 out of 18). Conclusions: ChatGPT provided correct responses to questions relevant to the daily clinical routine of anesthesiologists in approximately half of the cases, while the remaining responses contained errors. Therefore, caution is advised when using ChatGPT to retrieve anesthesia-related information. Although ChatGPT may not yet be suitable for clinical settings, we anticipate significant improvements in ChatGPT and other large language models in the near future. Regular assessments of ChatGPT's ASA classification accuracy are essential due to the evolving nature of ChatGPT as an artificial intelligence entity. This is especially important because ChatGPT has a clinically unacceptable rate of error and hallucination, particularly in pediatric patients and pregnant women. The methodology established in this study may be used to continue evaluating ChatGPT.
Keywords: American Society of Anesthesiologists, artificial intelligence, Chat Generative Pre-trained Transformer-3, ChatGPT
Procedia PDF Downloads 50
3167 Novel Method of In-Situ Tracking of Mechanical Changes in Composite Electrodes during Charging-Discharging by QCM-D
Authors: M. D. Levi, Netanel Shpigel, Sergey Sigalov, Gregory Salitra, Leonid Daikhin, Doron Aurbach
Abstract:
We have developed an in-situ method for tracking ion adsorption into composite nanoporous carbon electrodes based on the quartz-crystal microbalance (QCM). In the first papers, the QCM was used as a simple gravimetric probe of compositional changes in carbon porous composite electrodes during their charging, since variation of the electrode potential did not significantly change the width of the resonance. In contrast, when we passed from nanoporous carbons to a composite Li-ion battery material such as LiFePO4 olivine, the change in the resonance width was comparable with the change of the resonance frequency (the polymeric binder PVdF was shown to be completely rigid when used in aqueous solutions). We have provided a quantitative hydrodynamic admittance model of ion-insertion processes into the electrode host accompanied by intercalation-induced dimensional changes of the electrode particles, and hence of the entire electrode coating. The change in electrode deformation and the related porosity modify the hydrodynamic solid-liquid interactions tracked by QCM with dissipation monitoring. Using admittance modeling, we are able to evaluate the changes in effective thickness and permeability/porosity of the composite electrode caused by the applied potential and as a function of cycle number. This unique non-destructive technique may have great advantages in the early diagnostics of the cycling life durability of batteries and supercapacitors.
Keywords: Li-ion batteries, particle deformations, QCM-D, viscoelasticity
Procedia PDF Downloads 446
3166 Investigation on Ultrahigh Heat Flux of Nanoporous Membrane Evaporation Using Dimensionless Lattice Boltzmann Method
Authors: W. H. Zheng, J. Li, F. J. Hong
Abstract:
Thin liquid film evaporation in ultrathin nanoporous membranes, which reduce the viscous resistance while still maintaining high capillary pressure and efficient liquid delivery, is a promising thermal management approach for cooling high-power electronic devices. Given the challenges and technical limitations of experimental studies, namely accurate interface temperature sensing, a complex manufacturing process, and the short lifetime of membranes, a dimensionless lattice Boltzmann method capable of recovering the thermophysical properties of the working fluid is derived. The evaporation of R134a into its pure vapour ambient in nanoporous membranes with a pore diameter of 80 nm, a thickness of 472 nm, and three porosities of 0.25, 0.33, and 0.5 is numerically simulated. The numerical results indicate that the highest heat transfer coefficient is about 1740 kW/m²·K, and the highest heat flux is about 1.49 kW/cm² with a wall superheat of only 8.59 K in the case of a porosity of 0.5. The dissipated heat flux scales with porosity because of the increasing effective evaporative area. Additionally, the self-regulation of the shape and curvature of the meniscus under different operating conditions is also observed. This work offers a promising approach to forecast membrane performance for different geometries and working fluids.
Keywords: high heat flux, ultrathin nanoporous membrane, thin film evaporation, lattice Boltzmann method
Procedia PDF Downloads 163
3165 An Experimental Investigation of Chemical Enhanced Oil Recovery (CEOR) for Fractured Carbonate Reservoirs, Case Study: Kais Formation on Wakamuk Field
Authors: Jackson Andreas Theo Pola, Leksono Mucharam, Hari Oetomo, Budi Susanto, Wisnu Nugraha
Abstract:
About half of the world's oil reserves are located in carbonate reservoirs, of which 65% are oil wet and 12% intermediate wet [1]. Oil recovery in oil-wet or mixed-wet carbonate reservoirs can be increased by dissolving surfactant in the injected water to change the rock wettability from oil wet to more water wet. The Wakamuk Field, operated by PetroChina International (Bermuda) Ltd. and PT. Pertamina EP in Papua, produces from the main reservoir of the Miocene Kais Limestone. First production commenced in August 2004, and the peak field production of 1456 BOPD occurred in August 2010. The field was found to be a complex reservoir system, and until 2014 cumulative oil production was 2.07 MMBO, less than 9% of OOIP. This performance is indicative of the presence of secondary porosity in addition to matrix porosity, which has a low average porosity of 13% and permeability of less than 7 mD. Implementing chemical EOR in this case is the best way to increase oil production. However, the selected chemical must be able to lower the interfacial tension (IFT), reduce oil viscosity, and alter the wettability; thus, a special chemical treatment named SeMAR has been proposed. Numerous laboratory tests, such as phase behavior tests, core compatibility tests, mixture viscosity, contact angle measurement, IFT, imbibition tests, and core flooding, were conducted on Wakamuk field samples. Based on the spontaneous imbibition results for the Wakamuk field core, the SeMAR formulation with composition S12A gave an oil recovery of 43.94% at 1 wt% concentration and a maximum oil recovery of 87.3% at 3 wt% concentration. In addition, the first core flooding scenario gave an oil recovery of 60.32% at 1 wt% concentration of S12A, and the second scenario gave 96.78% oil recovery at 3 wt% concentration. The soaking time of the chemicals has a significant effect on recovery, and higher chemical concentrations affect larger areas of wettability and therefore give higher oil recovery. The chemical that gives the best overall results from the laboratory tests will also be considered for a huff-and-puff injection trial (pilot project) for increasing oil recovery from the Wakamuk Field.
Keywords: Wakamuk field, chemical treatment, oil recovery, viscosity
Procedia PDF Downloads 693
3164 Providing Resilience: An Overview of the Actions in an Elderly Suburban Area in Rio de Janeiro
Authors: Alan Silva, Carla Cipolla
Abstract:
The increase in life expectancy around the world is a current challenge for governments, demanding solutions for elderly people. In this context, service design and age-friendly design appear as approaches to create solutions that favor active aging through social inclusion and better quality of life. In essence, age-friendly design aims to include elderly people in the democratic process of creation in order to strengthen their participation and empowerment through intellectual, social, civic, recreational, cultural, and spiritual activities. All of these activities aim to provide resilience to this segment by granting access to the reserves needed for adaptation and growth in the face of life's challenges. Following that approach, this research provides an overview of the actions related to the integration and social qualification of elderly people in a suburban area of Rio de Janeiro. Based on Design Thinking as presented by Brown (2009), this research takes a qualitative-exploratory approach, with necessities and actions collected through observation and interviews about the daily life of individuals in the elderly community, searching for information about the personal capacitation and social integration of the studied population. Subsequently, a critical analysis of this overview is conducted, pointing out the potentialities and limitations of these actions. At the end of the research, a well-being map of solutions classified as physical, mental, and social is created, also indicating which current services are relevant and which activities can be transformed into services for that community. In conclusion, the contribution of this research is the construction of a map of solutions that provides resilience to the studied public and favors the concept of active aging in society. From this map of solutions, it is possible to identify the resources necessary for the solutions to be operationalized and their journeys with users in the elderly segment.
Keywords: resilience, age-friendly design, service design, active aging
Procedia PDF Downloads 98
3163 The Impact on the Composition of Survey Refusals' Demographic Profile When Implementing Different Classifications
Authors: Eva Tsouparopoulou, Maria Symeonaki
Abstract:
The internationally documented decline in survey response rates over the last two decades is mainly attributed to refusals. In fieldwork, a refusal may be obtained not only from the respondent himself/herself, but also from other sources on the respondent's behalf, such as other household members, apartment building residents or administrators, and neighborhood residents. In this paper, we investigate how the composition of the demographic profile of survey refusals changes when different classifications are implemented, and the classification issues arising from that. The analysis is based on the 2002-2018 European Social Survey (ESS) datasets for Belgium, Germany, and the United Kingdom. For these three countries, the number of selected sample units coded as a type of refusal in all nine rounds under investigation was large enough to meet the purposes of the analysis. The results indicate the existence of four different possible classifications that can be implemented, and the significance of choosing the one that strengthens the contrasts between the demographic profiles of the different types of respondents. Since the foundation of quantitative social research lies in the triptych of definition, classification, and measurement, this study aims to identify the multiplicity of the definition of survey refusals as a methodological tool for the continually growing research on non-response.
Keywords: non-response, refusals, European social survey, classification
Procedia PDF Downloads 86