Search results for: image quality metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12635

12035 Hybrid Algorithm for Frequency Channel Selection in Wi-Fi Networks

Authors: Cesar Hernández, Diego Giral, Ingrid Páez

Abstract:

This article proposes a hybrid algorithm for spectrum allocation in cognitive radio networks, based on the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), to improve the spectrum mobility performance of secondary users in cognitive radio networks. To assess the performance of the proposed algorithm, a comparative analysis between the proposed AHP-TOPSIS, the Grey Relational Analysis (GRA), and the Multiplicative Exponent Weighting (MEW) algorithms is performed. Four evaluation metrics are used: the cumulative average of failed handoffs, the cumulative average of handoffs performed, the cumulative average of transmission bandwidth, and the cumulative average of transmission delay. The results of the comparison show that the AHP-TOPSIS algorithm provides 2.4 times better performance than the GRA algorithm and 1.5 times better than the MEW algorithm.
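
As a rough illustration of the TOPSIS half of such a hybrid, the sketch below ranks candidate channels against several handoff criteria. In the authors' scheme the criteria weights would come from AHP; the weights, criteria, and channel scores shown here are illustrative assumptions, not the paper's data.

```python
import numpy as np

def topsis_rank(decision_matrix, weights, benefit_criteria):
    """Rank alternatives (e.g., candidate frequency channels) with TOPSIS.

    decision_matrix: (n_alternatives, n_criteria) raw scores
    weights: criteria weights, e.g., obtained from AHP
    benefit_criteria: boolean mask, True where higher is better
    """
    X = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)

    # Vector-normalize each criterion, then apply the weights
    V = w * X / np.linalg.norm(X, axis=0)

    # Ideal and anti-ideal solutions per criterion
    ideal = np.where(benefit_criteria, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_criteria, V.min(axis=0), V.max(axis=0))

    # Closeness coefficient: larger means closer to the ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical channels scored on bandwidth (benefit), delay (cost), failed-handoff rate (cost)
channels = [[6.0, 20.0, 0.10],
            [4.5, 12.0, 0.05],
            [7.0, 35.0, 0.20]]
scores = topsis_rank(channels, weights=[0.5, 0.3, 0.2],
                     benefit_criteria=np.array([True, False, False]))
print(scores.argmax(), scores)  # index of the preferred channel and all closeness scores
```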

Keywords: cognitive radio, decision making, hybrid algorithm, spectrum handoff, wireless networks

Procedia PDF Downloads 546
12034 Underwater Image Enhancement and Reconstruction Using CNN and the MultiUNet Model

Authors: Snehal G. Teli, R. J. Shelke

Abstract:

CNN and MultiUNet models form the framework of the proposed method for enhancing and reconstructing underwater images. The MultiUNet performs both multiscale feature merging and regeneration, while the CNN collects the relevant features. Extensive tests on benchmark datasets show that the proposed strategy performs better than the latest methods. As a result of this work, underwater images can be represented and interpreted with greater clarity in a range of underwater applications. This strategy will advance underwater exploration and marine research by enhancing real-time underwater image processing systems, underwater robotic vision, and underwater surveillance.

Keywords: convolutional neural network, image enhancement, machine learning, multiunet, underwater images

Procedia PDF Downloads 85
12033 Performance Analysis of Proprietary and Non-Proprietary Tools for Regression Testing Using Genetic Algorithm

Authors: K. Hema Shankari, R. Thirumalaiselvi, N. V. Balasubramanian

Abstract:

The present paper addresses research in the area of regression testing, with emphasis on automated tools as well as prioritization of test cases. The uniqueness of regression testing and its cyclic nature are pointed out. The difference in approach between industry, with the business model as its basis, and academia, with its focus on data mining, is highlighted. Test metrics are discussed as a prelude to our formula for prioritization; a case study is further discussed to illustrate this methodology. An industrial case study is also described in the paper, where the number of test cases is so large that they have to be grouped into test suites. In such situations, the genetic algorithm we propose can be used to reconfigure these test suites in each cycle of regression testing. The comparison is made between a proprietary tool and an open-source tool using the above-mentioned metrics. Our approach is clarified through several tables.
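
The APFD metric listed in the keywords is the standard yardstick for comparing test-case orderings. A minimal sketch of its computation is given below; the fault matrix and the prioritized order are hypothetical examples, not data from the case studies.

```python
def apfd(fault_matrix, ordering):
    """Average Percentage of Faults Detected for a test-case ordering.

    fault_matrix[t][f] is True if test case t detects fault f.
    ordering is a permutation of test-case indices (the prioritized order).
    APFD = 1 - (TF_1 + ... + TF_m) / (n * m) + 1 / (2n),
    where TF_i is the 1-based position of the first test revealing fault i.
    """
    n = len(ordering)
    m = len(fault_matrix[0])
    first_positions = []
    for f in range(m):
        # position of the first test in the ordering that detects fault f
        pos = next(i + 1 for i, t in enumerate(ordering) if fault_matrix[t][f])
        first_positions.append(pos)
    return 1.0 - sum(first_positions) / (n * m) + 1.0 / (2 * n)

# Hypothetical 4 test cases x 3 faults
faults = [[True, False, False],
          [False, True, False],
          [False, False, True],
          [True, True, False]]
print(apfd(faults, ordering=[3, 2, 0, 1]))  # ordering found e.g. by a genetic algorithm
```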

Keywords: APFD metric, genetic algorithm, regression testing, RFT tool, test case prioritization, selenium tool

Procedia PDF Downloads 445
12032 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Biomedical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Regularly monitoring the glucose level in the body helps identify hyperglycemia and hypoglycemia, which can cause severe medical problems such as nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by a Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms such as multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
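
A minimal sketch of the regression step is shown below, fitting a multiple linear regression to mean colour values extracted from solution images. The feature values and concentrations are invented for illustration and are not the paper's experimental data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical feature table: mean R, G, B values extracted from each
# glucose-solution image, with the known concentration as the target.
X = np.array([[112, 180, 95],
              [104, 168, 90],
              [ 96, 150, 84],
              [ 88, 135, 78],
              [ 80, 120, 72]], dtype=float)        # RGB means per sample
y = np.array([50, 100, 150, 200, 250], dtype=float)  # assumed concentrations in mg/dL

model = LinearRegression().fit(X, y)
pred = model.predict(X)
print("R^2 on training data:", r2_score(y, pred))
print("Predicted concentration for a new sample:",
      model.predict([[100, 160, 88]])[0])
```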

Keywords: artificial intelligence, glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 209
12031 Developing a Grading System for Restaurants

Authors: Joseph Roberson, Carina Kleynhans, Willie Coetzee

Abstract:

The low entry barriers of the restaurant industry lead to an extremely competitive business environment. In this volatile business sector, it is of the utmost importance to implement a strategy of quality differentiation. Vital aspects of a quality differentiation strategy are total quality management, benchmarking, and service quality management. Ultimately, restaurant success depends on the continuous support of customers. Customers select restaurants based on their expectations of quality. If customers' expectations are met, they perceive quality service and will re-patronize the restaurant. The restaurateur can manage perceptions of quality by influencing expectations while ensuring that those expectations are not inflated. The management of expectations can be done by communicating service quality to customers. The aim of this research paper is to describe the development of a grading process for restaurants. An assessment of the extensive body of literature on grading was conducted through content analysis. A standardized method for developing a grading system would assist in establishing successful grading systems that could inform both customers and restaurateurs of restaurant quality.

Keywords: benchmarking, restaurants, grading, service quality, total quality management

Procedia PDF Downloads 340
12030 Low-Cost Image Processing System for Evaluating Pavement Surface Distress

Authors: Keerti Kembhavi, M. R. Archana, V. Anjaneyappa

Abstract:

Most asphalt pavement condition evaluations use rating frameworks in which asphalt pavement distress is estimated by type, extent, and severity. Rating is carried out via the pavement condition rating (PCR), which is tedious and expensive. This paper presents the development of a low-cost technique for pavement distress image analysis that permits the identification of potholes and cracks. The paper explores the application of image processing tools for the detection of potholes and cracks. Longitudinal cracking and potholes are detected using Fuzzy C-Means (FCM) clustering followed by a spectral clustering algorithm. The framework comprises three phases: image acquisition, processing, and extraction of features. A digital camera (GoPro) with a holder is used to capture pavement distress images from a moving vehicle. The FCM classifier and spectral clustering algorithms are used to compute features and classify the longitudinal cracking and potholes. The MATLAB R2016a image processing toolkit is used for performance analysis to assess the viability of pavement distress detection on selected urban stretches of Bengaluru city, India. The image evaluation with the semi-automated image processing framework identified the features of longitudinal cracks and potholes with an accuracy of about 80%. Further, the detected images are validated against the actual dimensions, and the dimension variability is about 0.46. The linear regression model y = 1.171x - 0.155 is obtained from the existing and the experimental (image processing) areas. The R² value obtained from the best-fit line is 0.807, which indicates a large positive linear association in the linear regression model.
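
The FCM step can be sketched as below on grey-level pixel intensities. This is a generic fuzzy C-means implementation under the assumption that distress pixels are darker than sound pavement; it is not the authors' exact feature set, and the spectral-clustering stage is omitted.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means on a 1-D feature (e.g., pixel intensities).

    Returns the cluster centres and the fuzzy membership matrix U (n_samples x c).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).ravel()
    U = rng.random((x.size, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Weighted centres: c_k = sum_i u_ik^m x_i / sum_i u_ik^m
        centres = (Um.T @ x) / Um.sum(axis=0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        U = 1.0 / (d ** (2.0 / (m - 1)) *
                   np.sum(d ** (-2.0 / (m - 1)), axis=1, keepdims=True))
    return centres, U

# Hypothetical grey-level pixels: dark crack/pothole pixels vs. brighter pavement
pixels = np.concatenate([np.random.normal(60, 10, 500),
                         np.random.normal(170, 15, 2000)])
centres, U = fuzzy_c_means(pixels, n_clusters=2)
labels = U.argmax(axis=1)   # crisp labels; the darker cluster flags distress pixels
print(centres)
```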

Keywords: crack detection, pothole detection, spectral clustering, fuzzy-c-means

Procedia PDF Downloads 185
12029 Beyond Taguchi’s Concept of the Quality Loss Function

Authors: Atul Dev, Pankaj Jha

Abstract:

Dr. Genichi Taguchi looked at quality in a broader sense and gave an excellent definition of quality in terms of loss to society. However, the scope of this definition is limited to the losses imposed on the customer by a poor-quality product, considered only during the useful life of the product; moreover, in certain situations this loss can even be zero. In this paper, it is proposed that the scope of product quality be further enhanced by considering the losses imposed on society at large by a poor-quality product, due to associated environmental and safety-related factors, over the complete life cycle of the product. Moreover, though these losses can be further minimized with the use of techno-safety interventions, the net losses to society can never be made zero. This paper proposes an entirely new approach towards defining product quality and is based on Taguchi’s definition of quality.
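
For reference, the classical quality loss function that the paper sets out to extend can be stated in a few lines; the target, tolerance, and cost figures below are purely illustrative assumptions.

```python
def taguchi_loss(y, target, k):
    """Taguchi's nominal-the-best quality loss: L(y) = k * (y - target)**2.

    k is usually derived from the cost A0 incurred at the customer tolerance
    limit Delta0: k = A0 / Delta0**2.
    """
    return k * (y - target) ** 2

# Hypothetical example: target dimension 10.0 mm, $50 loss at the +/- 0.5 mm tolerance limit
k = 50 / 0.5 ** 2                    # = 200 $/mm^2
print(taguchi_loss(10.2, 10.0, k))   # $8 loss to society for a unit measuring 10.2 mm
```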

Keywords: existing concept, goal post philosophy, life cycle, proposed concept, quality loss function

Procedia PDF Downloads 317
12028 Heuristic Spatial-Spectral Hyperspectral Image Segmentation Using Bands Quartile Box Plot Profiles

Authors: Mohamed A. Almoghalis, Osman M. Hegazy, Ibrahim F. Imam, Ali H. Elbastawessy

Abstract:

This paper presents a new hyperspectral image segmentation scheme that respects both spatial and spectral contexts. The scheme uses the 8-pixel spatial pattern to build a weight structure that holds the number of outlier bands for each pixel among its neighborhood windows in different directions. The number of outlier bands for a pixel is obtained using band quartile box plot profiles among the spatial 8-pixel pattern windows. The quartile box plot weight structure represents the spatial-spectral context in the image. Instead of starting the segmentation process from single pixels, the proposed methodology starts from pixel groups that have been shown to share the same spectral features with respect to their spatial context. As a result, the segmentation scheme starts with jigsaw pieces that build a mosaic image. The following step builds a model for each jigsaw piece in the mosaic image. Each jigsaw piece is then merged with another jigsaw piece using KNN applied to their band quartile box plot profiles. The scheme iterates until the required number of segments is reached. Experiments use two data sets obtained from the Earth Observing-1 (EO-1) sensor for Egypt and France. Initial qualitative analysis showed encouraging results compared with ground truth. Quantitative analysis of the results will be included in the final paper.
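
A per-pixel outlier-band count of the kind described can be sketched as follows. The 1.5x interquartile-range whisker rule and the synthetic spectra are assumptions for illustration; the paper does not state its exact whisker definition here.

```python
import numpy as np

def outlier_band_count(centre_pixel, neighbour_pixels):
    """Count the bands of a centre pixel falling outside the quartile
    box-plot whiskers built, band by band, from its spatial neighbours.

    centre_pixel: (n_bands,) spectrum of the pixel under test
    neighbour_pixels: (n_neighbours, n_bands) spectra of the 8-neighbourhood
    """
    q1 = np.percentile(neighbour_pixels, 25, axis=0)
    q3 = np.percentile(neighbour_pixels, 75, axis=0)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return int(np.sum((centre_pixel < lower) | (centre_pixel > upper)))

# Hypothetical 8 neighbours with 200 spectral bands
rng = np.random.default_rng(1)
neighbours = rng.normal(0.4, 0.05, size=(8, 200))
pixel = rng.normal(0.4, 0.05, size=200)
pixel[:10] += 0.5                      # make the first 10 bands outliers
print(outlier_band_count(pixel, neighbours))
```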

Keywords: hyperspectral image segmentation, image processing, remote sensing, box plot

Procedia PDF Downloads 606
12027 The Impact of Milk Transport on Its Quality

Authors: Urszula Malaga-Toboła, Marek Gugała, Rafał Kornas, Robert Rusinek, Marek Gancarz

Abstract:

The work focuses on presenting the elements that determine the quality of fresh milk in the context of the quality of its transport. The quality of the raw material depends on the quality of transport. Milk transport involves many activities in which, apart from the temperature and sterility of the means of transport, it is important not to expose the raw material to shocks. Recently, there have been changes in the milk supply chain, affecting the logistics processes between its links. Based on the conducted research and analyses, it was found that the condition of the road surface on which milk is transported affects its quality. For the T1 milk transport route (gravel roads of very poor and poor quality), the lowest bacterial count and the highest somatic cell count, fat content, and temperature of the transported milk were obtained. A well-organized, integrated transport system is a real need for most companies today. The analysis showed significant differences in the quality of milk delivered to the dairy.

Keywords: fresh milk, transport, milk quality, dairy

Procedia PDF Downloads 89
12026 Construction Quality Perception of Construction Professionals and Their Expectations from a Quality Improvement Technique in Pakistan

Authors: Muhammad Yousaf Sadiq

Abstract:

The complexity in defining construction quality arises from its perception, which is based on inherent market conditions and their requirements, the diversified stakeholders themselves, and their desired output. A quantitative, survey-based approach was adopted in this study. A questionnaire-based survey was conducted to assess construction quality perception and expectations in the context of a quality improvement technique. Survey feedback from professionals of leading construction organizations in the Pakistani construction industry was analyzed. The financial capacity, organizational structure, and construction experience of the firms formed the basis for their selection. The quality perception was found to be project-scope-oriented and considered an excess cost for a construction project. Any quality improvement technique was expected to maximize the profit for the employer by improving productivity in a construction project. The study is beneficial for construction professionals in assessing the prevailing construction quality perception and the expectations from the implementation of any quality improvement technique in construction projects.

Keywords: construction quality, expectation, improvement, perception

Procedia PDF Downloads 480
12025 Neighborhood Graph-Optimized Preserving Discriminant Analysis for Image Feature Extraction

Authors: Xiaoheng Tan, Xianfang Li, Tan Guo, Yuchuan Liu, Zhijun Yang, Hongye Li, Kai Fu, Yufang Wu, Heling Gong

Abstract:

Image data collected in practice are often high-dimensional and contain noise and redundant information. Therefore, it is necessary to extract a compact feature representation of the original perceived image. In this process, effective use of prior knowledge such as the data structure distribution and sample labels is key to enhancing image feature discrimination and robustness. Based on the above considerations, this paper proposes a locality preserving discriminant feature learning model based on graph optimization. The model has the following characteristics: (1) the locality preserving constraint can effectively excavate and preserve the local structural relationships between data; (2) the flexibility of graph learning can be improved by constructing a new local geometric structure graph using label information and a nearest neighbor threshold; (3) the L₂,₁ norm is used to redefine LDA, a diagonal matrix is introduced as the scale factor of LDA, and the samples are selected, which improves the robustness of feature learning. The validity and robustness of the proposed algorithm are verified by experiments on two public image datasets.

Keywords: feature extraction, graph optimization, locality preserving projection, linear discriminant analysis, L₂,₁ norm

Procedia PDF Downloads 155
12024 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing

Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill

Abstract:

In order to survive in the market, companies must constantly develop improved and new products. These products are designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company’s internal research and development department. However, a new approach has been emerging for some years now, involving external knowledge in the innovation process. This approach is called open innovation and identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use findings from Natural Language Processing, e.g., Named Entity Recognition, specific dictionaries, a triple tagger, and a part-of-speech tagger. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs.
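
As an example of the kind of graph-theoretical author metric mentioned, the snippet below computes the density of a small author-interaction graph with NetworkX. The graph, its construction rule, and the author names are hypothetical assumptions, not the paper's ontology.

```python
import networkx as nx

# Hypothetical author-interaction graph built from social media posts:
# nodes are authors, edges connect authors who replied to or mentioned each other.
G = nx.Graph()
G.add_edges_from([("alice", "bob"), ("bob", "carol"),
                  ("carol", "alice"), ("dave", "alice")])

# Density = 2|E| / (|V| (|V| - 1)) for an undirected graph;
# authors in denser neighbourhoods can be weighted higher when ranking ideas.
print(nx.density(G))               # 0.666... for 4 nodes and 4 edges
print(nx.degree_centrality(G))     # another simple user-centred metric
```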

Keywords: idea ontology, innovation management, semantic search, open information extraction

Procedia PDF Downloads 192
12023 A Ratio-Weighted Decision Tree Algorithm for Imbalance Dataset Classification

Authors: Doyin Afolabi, Phillip Adewole, Oladipupo Sennaike

Abstract:

Most well-known classifiers, including the decision tree algorithm, can make predictions on balanced datasets efficiently. However, the decision tree algorithm tends to be biased on imbalanced datasets because of the skewness of their distributions. To overcome this problem, this study proposes a weighted decision tree algorithm that aims to remove the bias toward the majority class and prevent the reduction of majority observations in imbalanced dataset classification. The proposed weighted decision tree algorithm was tested on three imbalanced datasets: a cancer dataset, the German credit dataset, and a banknote dataset. The specificity, sensitivity, and accuracy metrics were used to evaluate the performance of the proposed decision tree algorithm on the datasets. The evaluation results show that, for some of the weights of our proposed decision tree, the specificity, sensitivity, and accuracy metrics gave better results than those of the ID3 decision tree and a decision tree induced with minority entropy for all three datasets.
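
The three evaluation metrics can be computed directly from a confusion matrix, as in the small sketch below; the labels and predictions are made up for illustration.

```python
from sklearn.metrics import confusion_matrix

def imbalance_metrics(y_true, y_pred):
    """Specificity, sensitivity, and accuracy for a binary classifier,
    the three metrics used to compare the decision tree variants."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)      # recall on the minority (positive) class
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return specificity, sensitivity, accuracy

# Hypothetical predictions on an imbalanced test set (1 = minority class)
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 85 + [1] * 5 + [1] * 7 + [0] * 3
print(imbalance_metrics(y_true, y_pred))
```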

Keywords: data mining, decision tree, classification, imbalance dataset

Procedia PDF Downloads 143
12022 An Audit on the Quality of Pre-Operative Intra-Oral Digital Radiographs Taken for Dental Extractions in a General Practice Setting

Authors: Gabrielle O'Donoghue

Abstract:

Background: Pre-operative radiographs facilitate assessment and treatment planning in minor oral surgery. Quality assurance for dental radiography advocates the As Low As Reasonably Achievable (ALARA) principle in collecting accurate diagnostic information. Aims: To audit the quality of digital intraoral periapicals (IOPAs) taken prior to dental extractions in a metropolitan general dental practice setting. Standards: The National Radiological Protection Board (NRPB) guidance outlines three grades of radiograph quality: excellent (Grade 1, >70% of total exposures), diagnostically acceptable (Grade 2, <20%), and unacceptable (Grade 3, <10%). Methodology: A study of pre-operative radiographs taken prior to dental extractions across 12 private general dental practices in a large metropolitan area by 44 practitioners. A total of 725 extractions were assessed, allowing 258 IOPAs to be reviewed in one audit cycle. Results (first cycle): Of the 258 IOPAs, 223 (86.4%) scored Grade 1, 27 (10.5%) Grade 2, and 8 (3.1%) Grade 3. The standard was met. 35 dental extractions were performed without an available pre-operative radiograph. Action Plan & Recommendations: Results were distributed to all staff, and a continuing professional development evening was organized to outline recommendations for improving image quality. A second audit cycle is proposed at a six-month interval to review the recommendations and appraise the results. Conclusion: The overall standard of radiographs met the published guidelines. A significant reduction in the number of procedures undertaken without pre-operative imaging is expected at the six-month interval. An investigation into non-diagnostic imaging and associated adverse patient outcomes is being considered. Maintenance of the standards achieved is expected in the second audit cycle to ensure consistently high-quality imaging.

Keywords: audit, oral radiology, oral surgery, periapical radiographs, quality assurance

Procedia PDF Downloads 170
12021 Development of Performance Measures for the Implementation of Total Quality Management in Indian Industry

Authors: Perminderjit Singh, Sukhvir Singh

Abstract:

Total Quality Management (TQM) refers to management methods used to enhance quality and productivity in business organizations, and it has become a frequently used term in discussions concerning quality. TQM has increased the demands on organizational policy, and customers have gained more importance in the organization's focus. TQM is considered an important management tool which helps organizations to satisfy their customers. In the present research, critical success factors including management commitment, customer satisfaction, continuous improvement, work culture and environment, supplier quality management, training and development, employee satisfaction, and product/process design are studied. A questionnaire is developed to study these critical success factors in the implementation of total quality management in Indian industry. The questionnaires were filled in by consulting different industrial organizations. The data collected from the questionnaires are analyzed using descriptive statistics and importance indexes.

Keywords: total quality management, critical success factor, employee satisfaction, supplier quality management, customer focus, quality information, quality measurement

Procedia PDF Downloads 481
12020 Dark and Bright Envelopes for Dehazing Images

Authors: Zihan Yu, Kohei Inoue, Kiichi Urahama

Abstract:

We present a method for de-hazing images. A dark envelope image is derived with the bilateral minimum filter and a bright envelope is derived with the bilateral maximum filter. The ambient light and transmission of the scene are estimated from these two envelope images. An image without haze is reconstructed from the estimated ambient light and transmission.
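
The envelope idea can be sketched as below. The paper uses bilateral minimum and maximum filters; plain local min/max filters are used here as a rough stand-in, and the ambient-light and transmission estimates are simplified assumptions rather than the authors' estimators.

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def simple_dehaze(img, window=15, omega=0.95, t_min=0.1):
    """Toy envelope-based dehazing sketch.

    img: float RGB image in [0, 1], shape (H, W, 3).
    """
    dark = minimum_filter(img.min(axis=2), size=window)     # dark envelope
    bright = maximum_filter(img.max(axis=2), size=window)   # bright envelope

    # Ambient light: mean colour over the brightest dark-envelope pixels (assumed rule)
    idx = np.unravel_index(np.argsort(dark, axis=None)[-100:], dark.shape)
    A = img[idx].mean(axis=0)

    # Transmission from the dark envelope; keep a floor to avoid amplifying noise
    t = np.clip(1.0 - omega * dark / A.max(), t_min, 1.0)

    # Recover the scene radiance: J = (I - A) / t + A
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0), bright

# Usage with a random stand-in image (replace with a real hazy photograph)
hazy = np.random.rand(120, 160, 3)
dehazed, bright_env = simple_dehaze(hazy)
```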

Keywords: image dehazing, bilateral minimum filter, bilateral maximum filter, local contrast

Procedia PDF Downloads 268
12019 The Image of a Flight Attendant Career: A Case Study of High School Students in Bangkok, Thailand

Authors: Kevin Wongleedee

Abstract:

The purposes of this research were to study the image of the flight attendant career from the perspective of high school students in Bangkok and to study their level of interest in pursuing a flight attendant career. A probability random sample of 400 students was utilized. Half the sample group came from private high schools and the other half came from public high schools. A questionnaire was used to collect the data, and a small number of in-depth interviews were also used to obtain their opinions about the image of, and their level of interest in, the flight attendant career. The findings revealed that the majority of respondents had a medium level of interest in the flight attendant career. High school students who majored in Math-English were more interested in a flight attendant career than high school students who majored in Science-Math, at a 0.05 level of significance. The flight attendant career was rated as a good career with a chance to travel to many countries. The image of the flight attendant career can be ranked as follows: a career with a chance to travel, a career requiring the ability to speak English, a career that requires punctuality, a career with a good service mind, and a career requiring an understanding of details. The findings from the in-depth interviews revealed that the major obstacles preventing high school students from choosing flight attendant as a career were their ability to speak English, their body proportions, and a lack of information.

Keywords: flight attendant, high school students, image, media engineering

Procedia PDF Downloads 374
12018 Investigating Kinetics and Mathematical Modeling of Batch Clarification Process for Non-Centrifugal Sugar Production

Authors: Divya Vats, Sanjay Mahajani

Abstract:

The clarification of sugarcane juice plays a pivotal role in the production of non-centrifugal sugar (NCS), profoundly influencing the quality of the final NCS product. In this study, we have investigated the kinetics and mathematical modeling of the batch clarification process. The turbidity of the clarified cane juice (in NTU) emerges as the determinant of the end product’s color. Moreover, this parameter underscores the significance of considering other variables as performance indicators for assessing the efficacy of the clarification process. Temperature-controlled experiments were meticulously conducted in a laboratory-scale batch mode. The primary objective was to discern the essential and optimized parameters crucial for augmenting the clarity of cane juice. Additionally, we explored the impact of pH and flocculant loading on the kinetics. Particle Image Velocimetry (PIV) is employed to comprehend the particle-particle and fluid-particle interactions. This technique facilitated a comprehensive understanding, paving the way for subsequent multiphase computational fluid dynamics (CFD) simulations using the Eulerian-Lagrangian approach in Ansys Fluent. These simulations accurately replicated comparable velocity profiles. The mechanism identified in this study supports the formulation of a mathematical model and presents a valuable framework for transitioning from the traditional batch process to a continuous process. The ultimate aim is to attain heightened productivity and unwavering consistency in product quality.

Keywords: non-centrifugal sugar, particle image velocimetry, computational fluid dynamics, mathematical modeling, turbidity

Procedia PDF Downloads 74
12017 ICanny: CNN Modulation Recognition Algorithm

Authors: Jingpeng Gao, Xinrui Mao, Zhibin Deng

Abstract:

To address the low recognition rate for composite signal modulation at low signal-to-noise ratio (SNR), this paper proposes a modulation recognition algorithm based on ICanny-CNN. Firstly, the radar signal is transformed into a time-frequency image by the Choi-Williams Distribution (CWD). Secondly, we propose an image processing algorithm using a guided filter and a threshold selection method, combined with hole filling and a mask operation. Finally, a shallow convolutional neural network (CNN) is combined with the ideas of depth-wise convolution (Dw Conv) and point-wise convolution (Pw Conv). The proposed CNN is designed to perform image classification and realize modulation recognition of radar signals. The simulation results show that the proposed algorithm reaches 90.83% recognition accuracy at 0 dB and 71.52% at -8 dB. Therefore, the proposed algorithm has good classification and anti-noise performance in radar signal modulation recognition and other fields.
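
A shallow CNN built from depth-wise and point-wise convolutions can be sketched as below in Keras. The layer counts, filter sizes, input resolution, and number of modulation classes are assumptions for illustration; the abstract does not specify the authors' exact architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dwpw_cnn(input_shape=(128, 128, 1), n_classes=8):
    """Small classifier for time-frequency images using Dw Conv + Pw Conv blocks."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        # Depth-wise convolution: one spatial filter per input channel (Dw Conv)
        layers.DepthwiseConv2D(3, padding="same", activation="relu"),
        # Point-wise convolution: 1x1 kernels mixing channels (Pw Conv)
        layers.Conv2D(32, 1, activation="relu"),
        layers.MaxPooling2D(2),
        layers.DepthwiseConv2D(3, padding="same", activation="relu"),
        layers.Conv2D(64, 1, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_dwpw_cnn()
model.summary()
```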

Keywords: modulation recognition, image processing, composite signal, improved Canny algorithm

Procedia PDF Downloads 194
12016 Advancing Phenological Understanding of Plants/Trees Through Phenocam Digital Time-lapse Images

Authors: Siddhartha Khare, Suyash Khare

Abstract:

Phenology, a crucial discipline in ecology, offers insights into the seasonal dynamics of organisms within natural ecosystems and the underlying environmental triggers. Leveraging the potent capabilities of digital repeat photography, PhenoCams capture invaluable data on the phenology of crops, plants, and trees. These cameras yield digital imagery in Red Green Blue (RGB) color channels, and some advanced systems even incorporate Near Infrared (NIR) bands. This study presents compelling case studies employing PhenoCam technology to unravel the phenology of black spruce trees. Through the analysis of RGB color channels, a range of essential color metrics including red chromatic coordinate (RCC), green chromatic coordinate (GCC), blue chromatic coordinate (BCC), vegetation contrast index (VCI), and excess green index (ExGI) are derived. These metrics illuminate variations in canopy color across seasons, shedding light on bud and leaf development. This, in turn, facilitates a deeper understanding of phenological events and aids in delineating the growth periods of trees and plants. The initial phase of this study addresses critical questions surrounding the fidelity of continuous canopy greenness records in representing bud developmental phases. Additionally, it discerns which color-based index most accurately tracks the seasonal variations in tree phenology within evergreen forest ecosystems. The subsequent section of this study delves into the transition dates of black spruce (Picea mariana (Mill.) B.S.P.) phenology. This is achieved through a fortnightly comparative analysis of the MODIS normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI). By employing PhenoCam technology and leveraging advanced color metrics, this study significantly advances our comprehension of black spruce tree phenology, offering valuable insights for ecological research and management.
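
The chromatic coordinates named above are simple ratios of the mean digital numbers in each channel. A minimal sketch is shown below, with a synthetic image standing in for a real PhenoCam frame and a region-of-interest mask as an assumed input.

```python
import numpy as np

def canopy_colour_metrics(rgb_image, roi_mask=None):
    """Standard PhenoCam greenness indices from an RGB image.

    rgb_image: uint8 or float array of shape (H, W, 3).
    roi_mask: optional boolean mask selecting the canopy region of interest.
    """
    img = rgb_image.astype(float)
    img = img[roi_mask] if roi_mask is not None else img.reshape(-1, 3)
    r, g, b = img[:, 0].mean(), img[:, 1].mean(), img[:, 2].mean()
    total = r + g + b
    return {
        "RCC": r / total,            # red chromatic coordinate
        "GCC": g / total,            # green chromatic coordinate
        "BCC": b / total,            # blue chromatic coordinate
        "ExGI": 2 * g - (r + b),     # excess green index
    }

# Hypothetical daily image; in practice GCC is tracked through the season
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
print(canopy_colour_metrics(image))
```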

Keywords: phenology, remote sensing, phenocam, color metrics, NDVI, GCC

Procedia PDF Downloads 65
12015 Image Analysis for Obturator Foramen Based on Marker-controlled Watershed Segmentation and Zernike Moments

Authors: Seda Sahin, Emin Akata

Abstract:

The obturator foramen is a specific structure in pelvic bone images, and its recognition is a new concept in medical image processing. Moreover, segmentation of bone structures such as the obturator foramen plays an essential role in clinical research in orthopedics. In this paper, we present a novel method to analyze the similarity between the substructures of the imaged region and a hand-drawn template on hip radiographs, in order to detect the obturator foramen accurately through the integrated use of marker-controlled watershed segmentation and the Zernike moment feature descriptor. Marker-controlled watershed segmentation is applied to separate the obturator foramen from the background effectively. The Zernike moment feature descriptor is used to match the binary template image against the segmented binary image of the obturator foramen for final extraction. The proposed method is tested on 100 randomly selected hip radiographs. The experimental results show that our method is able to segment obturator foramens with 96% accuracy.
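
A generic pipeline of this shape can be sketched with scikit-image and mahotas. The marker construction (conservative intensity percentiles) and the synthetic input are illustrative assumptions; they are not the authors' marker selection for hip radiographs.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed
from mahotas.features import zernike_moments

def segment_and_describe(gray, radius=64, degree=8):
    """Marker-controlled watershed followed by a Zernike-moment shape descriptor."""
    markers = np.zeros(gray.shape, dtype=int)
    markers[gray < np.percentile(gray, 10)] = 1   # sure background (assumed rule)
    markers[gray > np.percentile(gray, 90)] = 2   # sure foreground (assumed rule)
    labels = watershed(sobel(gray), markers)      # flood the gradient surface from markers

    region = (labels == 2).astype(np.uint8)       # candidate region as a binary mask
    # Zernike moments give a rotation-invariant description of the binary shape
    descriptor = zernike_moments(region, radius, degree=degree)
    return labels, descriptor

# Usage with a synthetic stand-in for a radiograph; matching would compare this
# descriptor with that of the hand-drawn template (e.g., by Euclidean distance).
gray = np.random.rand(256, 256)
labels, desc = segment_and_describe(gray)
```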

Keywords: medical image analysis, segmentation of bone structures on hip radiographs, marker-controlled watershed segmentation, zernike moment feature descriptor

Procedia PDF Downloads 439
12014 A Data-Driven Monitoring Technique Using Combined Anomaly Detectors

Authors: Fouzi Harrou, Ying Sun, Sofiane Khadraoui

Abstract:

Anomaly detection based on Principal Component Analysis (PCA) has been studied intensively and largely applied to multivariate processes with highly cross-correlated process variables. Monitoring metrics such as Hotelling's T² and the Q statistic are usually used in PCA-based monitoring to elucidate pattern variations in the principal and residual subspaces, respectively. However, these metrics are ill-suited to detecting small faults. In this paper, Exponentially Weighted Moving Average (EWMA) charts based on the Q and T² statistics, T²-EWMA and Q-EWMA, were developed for detecting faults in the process mean. The performance of the proposed methods was compared with that of the conventional PCA-based fault detection method using synthetic data. The results clearly show the benefit and effectiveness of the proposed methods over the conventional PCA method, especially for detecting small faults in highly correlated multivariate data.
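
The EWMA smoothing that underlies such charts is a one-line recursion; the sketch below applies it to a hypothetical Q-statistic series with a small mean shift, with the smoothing constant and the shift size chosen purely for illustration.

```python
import numpy as np

def ewma(statistic, lam=0.2):
    """Exponentially Weighted Moving Average of a monitoring statistic
    (e.g., a Q or Hotelling T^2 series): z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z = np.zeros_like(statistic, dtype=float)
    z[0] = statistic[0]
    for t in range(1, len(statistic)):
        z[t] = lam * statistic[t] + (1 - lam) * z[t - 1]
    return z

# Hypothetical Q-statistic sequence with a small sustained mean shift at t = 60
rng = np.random.default_rng(0)
q = rng.normal(1.0, 0.2, 100)
q[60:] += 0.3
q_ewma = ewma(q, lam=0.2)
# A small shift that single-observation limits would miss accumulates in q_ewma,
# which is then compared against EWMA control limits in the charting scheme.
print(q_ewma[55:70].round(2))
```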

Keywords: data-driven method, process control, anomaly detection, dimensionality reduction

Procedia PDF Downloads 301
12013 A Comparative Study of Medical Image Segmentation Methods for Tumor Detection

Authors: Mayssa Bensalah, Atef Boujelben, Mouna Baklouti, Mohamed Abid

Abstract:

Image segmentation plays a fundamental role in analysis and interpretation for many applications. The automated segmentation of organs and tissues throughout the body using computed imaging has been increasing rapidly. Indeed, it represents one of the most important parts of clinical diagnostic tools. In this paper, we present a thorough literature review of recent methods for tumour segmentation from medical images, briefly explaining the recent contributions of various researchers. These methods are then compared in order to define new directions for developing and improving the performance of tumour-area segmentation from medical images.

Keywords: features extraction, image segmentation, medical images, tumor detection

Procedia PDF Downloads 171
12012 Crop Classification using Unmanned Aerial Vehicle Images

Authors: Iqra Yaseen

Abstract:

Image processing, one of the well-known areas of computer science and engineering, has been essential to automation in the context of computer vision. In remote sensing, medical science, and many other fields, it has made it easier to uncover previously undiscovered facts. Grading of diverse items is now possible because of neural network algorithms, categorization, and digital image processing. Its use in the classification of agricultural products, particularly in the grading of seeds or grains and their cultivars, is widely recognized. A grading and sorting system enables the preservation of time, consistency, and uniformity. Global population growth has led to an increase in demand for food staples, biofuel, and other agricultural products. To meet this demand, the available resources must be used and managed more effectively. Image processing is growing rapidly in the field of agriculture. Many applications have been developed using this approach for crop identification and classification, land and disease detection, and for measuring other crop parameters. Vegetation localization is the basis for performing these tasks, as it helps to identify the areas where crops are present. The productivity of the agriculture industry can be increased via image processing based on Unmanned Aerial Vehicle (UAV) photography and satellite imagery. In this paper, we apply machine learning techniques such as Convolutional Neural Networks (CNN), deep learning, image processing, classification, and You Only Look Once (YOLO) to a UAV imaging dataset to divide the crops into distinct groups and choose the best way to use them.

Keywords: image processing, UAV, YOLO, CNN, deep learning, classification

Procedia PDF Downloads 115
12011 Maximum Entropy Based Image Segmentation of Human Skin Lesion

Authors: Sheema Shuja Khattak, Gule Saman, Imran Khan, Abdus Salam

Abstract:

Image segmentation plays an important role in medical imaging applications. Therefore, accurate methods are needed for the successful segmentation of medical images for the diagnosis and detection of various diseases. In this paper, we have used maximum entropy to achieve image segmentation. The maximum entropy has been calculated using the Shannon, Renyi, and Tsallis entropies. The novelty of this work lies in the detection of the skin lesion caused by the bite of the sand fly, which causes the disease known as cutaneous leishmaniasis.
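
The Shannon-entropy variant of maximum-entropy thresholding (Kapur-style) can be sketched as below; the Renyi and Tsallis variants replace the entropy expression. The synthetic image and the histogram settings are illustrative assumptions.

```python
import numpy as np

def max_entropy_threshold(gray_image, n_bins=256):
    """Choose the grey level that maximizes the summed Shannon entropies
    of the background and foreground histogram partitions."""
    hist, _ = np.histogram(gray_image, bins=n_bins, range=(0, n_bins))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, n_bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1          # normalized class histograms
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Hypothetical 8-bit grey-scale image; pixels above the threshold form the lesion mask
img = np.random.randint(0, 256, size=(200, 200))
t = max_entropy_threshold(img)
lesion_mask = img > t
print("threshold:", t)
```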

Keywords: Shannon entropy, maximum entropy, Renyi entropy, Tsallis entropy

Procedia PDF Downloads 466
12010 The Quality of Accounting Information of Private Companies in the Czech Republic

Authors: Kateřina Struhařová

Abstract:

The paper provides evidence on the quality of accounting information of Czech private companies. In general, private companies in the Czech Republic do not see the benefits of providing accounting information of high quality. Based on research into the financial statements of entrepreneurs and companies in the Zlin region, it was confirmed that the quality of accounting information differs among private entities and that the major factors affecting accounting information quality are whether the financial statements are audited and the size of the entity. Foreign shareholders and lenders also have some impact on accounting information quality.

Keywords: accounting information quality, financial statements, Czech Republic, private companies

Procedia PDF Downloads 309
12009 Model for Assessment of Quality Airport Services

Authors: Cristina da Silva Torres, José Luis Duarte Ribeiro, Maria Auxiliadora Cannarozzo Tinoco

Abstract:

As a result of the rapid growth of Brazilian air transport, many airports are at the limit of their capacity and show a reduction in the quality of the services provided. Thus, there is a need for models for assessing the quality of airport services. Because of this, the main objective of this work is to propose a model for the evaluation of quality attributes in airport services. To this end, a method composed of a literature review and interviews was used. A working method composed of five steps was structured, resulting in a model for evaluating the quality of airport services consisting of 8 dimensions and 45 attributes. The process mapping of passenger and luggage boarding and landing was used as the basis for the model definition. The contribution of this work is the integration of process management with structured models to assess the quality of services in airport environments.

Keywords: quality airport services, model for identification of attributes quality, air transport, passenger

Procedia PDF Downloads 539
12008 An Evaluation of ISO 9001:2008 and ISO 9001:2015 Standard Changes in Quality Management System

Authors: Filiz Ersoz, Deniz Merdin, Taner Ersoz

Abstract:

The objective of this study is to provide insight for enterprises that need to maintain their sustainability in harmony with changing competitive conditions, technology, and laws, with regard to ISO 9001:2015. In the study, ISO 9001:2015, which at the time existed as a draft planned to be put into force, was examined and its differences from the previous standard, ISO 9001:2008, were determined. To identify the differences, a survey was conducted among enterprises that implement a quality system. According to the findings obtained at the end of the study, it was observed that the enterprises attach importance to quality and follow developments in quality management systems, and they find the changes in the new draft document necessary.

Keywords: ISO 9001, quality, quality management system, quality revision

Procedia PDF Downloads 251
12007 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner

Authors: Aika Umemuro, Mitsuru Sato, Mizuki Narita, Saya Hori, Saya Sakurai, Tomomi Nakayama, Ayano Nakazawa, Toshihiro Ogura

Abstract:

Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.

Keywords: EEG scanner, eye-detector, mammography, observers

Procedia PDF Downloads 217
12006 Qualitative Meta-analysis of ICT4D Implementations

Authors: Miftah Hassen Jemal, Solomon Negash

Abstract:

This study undertakes a qualitative meta-analysis of qualitative studies conducted on ICT4D implementations. An interpretive approach to synthesizing the interpretations of the qualitative studies is adopted to guide the whole process of the study. The traditional criteria of trustworthiness of qualitative studies, in terms of transferability, consistency, and credibility, are used as quality metrics for the output of the interpretive synthesis process. The findings of the study are anticipated to be of value to policymakers by providing guidance for decisions related to ICT4D implementations. The study is also anticipated to contribute to research by extracting valuable insights from the extant literature and identifying potential areas that warrant further investigation.

Keywords: ICT4D implementations, interpretive synthesis, qualitative meta-analysis, qualitative studies

Procedia PDF Downloads 159