Search results for: image quality measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13664

13484 Understanding the Influence of Social Media on Individual’s Quality of Life Perceptions

Authors: Biljana Marković

Abstract:

Social networks are an integral part of our everyday lives and have become an indispensable medium for communication in personal and business environments. New forms and ways of communicating change the general mindset and significantly affect individuals' quality of life. Quality of life is perceived as an abstract term, yet people are often unaware that they directly shape the quality of their own lives through minor but significant everyday choices and decisions. Quality of life can be defined broadly, but in the widest sense it involves a subjective sense of satisfaction with one's life. Scientific knowledge about the impact of social networks on individuals' self-assessment of their quality of life is only beginning to accumulate. Available research indicates potential benefits as well as a number of disadvantages. Against this background, the study conducted by the authors focuses on analyzing the impact of social networks on individuals' self-assessment of quality of life and on the correlation between the time spent on social networks and the content individuals choose to share in order to present themselves. Moreover, it aims to explain how much, and in what ways, they critically judge the lives of others online. The research aspires to show the positive as well as the negative effects that social networks, primarily Facebook and Instagram, have on how individuals create an image of themselves and compare themselves with others. The paper is based on quantitative research conducted on a representative sample. An analysis of the results of an online survey tests the hypothesis that the content individuals share on social networks influences the image they create about themselves. A comparative analysis of the results obtained with those of similar research leads to a conclusion about the synergistic influence of social networks on respondents' perceived quality of life. The originality of this work lies in its approach of examining attitudes about an individual's life satisfaction, the way he or she creates a self-image through social networks, the extent to which he or she compares himself or herself with others, and which social media applications he or she uses. At the cognitive level, scientific contributions were made through the development of information concepts on quality of life; at the methodological level, through the development of an original methodology for qualitative alignment of respondents' attitudes using statistical analysis; and at the practical level, through the application of these concepts in assessing how self-image and the image of others are created through social networks.

Keywords: quality of life, social media, self image, influence of social media

Procedia PDF Downloads 103
13483 Lisbon Experience, Mobility, Quality of Life and Tourist Image: A Survey

Authors: Luca Zarrilli, Miguel Brito, Marianna Cappucci

Abstract:

Tourists recently named Lisbon the best city-break destination in Europe. This article analyses the various types of tourist experience in the city of Lisbon. The research method is a questionnaire aimed at investigating tourists' choices in the area of mobility, their perception of the quality of life, and their level of appreciation of neighbourhoods, landmarks and infrastructure. There is an obvious link between the quality of life and the quality of the tourist experience, but it is difficult to measure. Through this questionnaire, we hope to have made a small contribution to the understanding of the individual's perceptive sphere and behavioural choices, which is an essential element of any strategy for tourism marketing.

Keywords: Lisbon, mobility, quality of life, perception, tourism, hospitality

Procedia PDF Downloads 383
13482 Determinants of Customer Satisfaction: The Case of Abyssinia Bank Customers in Addis Ababa, Ethiopia

Authors: Yosef Ferede Bogale

Abstract:

The purpose of this study was to evaluate the degree of customer satisfaction and the variables influencing it in the case of the Bank of Abyssinia branches in the Arada and Bole districts of Addis Ababa. The study was carried out in Addis Ababa, the capital city of Ethiopia, using a mixed research approach and a descriptive and explanatory research design. Both primary and secondary data were employed. The study's target population consisted of 1000 of the bank's most prestigious clients. With a 93% response rate, the 265 respondents came from both genders, were in the active age group, and had higher levels of education and work experience. This investigation found that the degree of client satisfaction was rated as medium. The company's image practices, employee competency, technology, and service quality were also given a middling rating. Further, the results demonstrate that corporate image, employee competency, technology, and service quality all positively and significantly affect customer satisfaction. The study found that, to varying degrees, company image, technology, competence, and high-quality financial services will all improve customer satisfaction. Accordingly, the report recommends that banks monitor customer satisfaction and service quality at least twice a year, because there is a growing movement among bank service providers toward accountability, and measuring these factors is crucial. The study also recommends that banks make every effort to meet customers' expectations to the highest possible level.

Keywords: customer satisfaction, corporate image, quality service risk, banks

Procedia PDF Downloads 42
13481 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition

Authors: Latha Subbiah, Dhanalakshmi Samiappan

Abstract:

In this paper, we propose denoising of common carotid artery (CCA) B-mode ultrasound images by a curvelet thresholding decomposition approach, together with automatic segmentation of the intima-media thickness and the adventitia boundary. Through decomposition, the local geometry of the image and the direction of its gradients are well preserved. The components are combined into a single vector-valued function, which removes noise patches, and a double threshold is applied to further suppress speckle noise in the image. The denoised image is segmented by an active contour without specifying seed points. Combined with level-set theory, the contours provide sub-regions with continuous boundaries, and the deformable contours match the shapes and motion of objects in the images. A curve or surface under constraints is evolved from the image so that it is pulled toward the required image features; region-based and boundary-based information are integrated to obtain the contour. The method accounts for the multiplicative speckle noise in both objective and subjective quality measurements and thus leads to better-segmented results. The proposed denoising method gives better performance metrics than other state-of-the-art denoising algorithms.
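
As a rough illustration of the denoise-then-segment pipeline described above, the following Python sketch uses a wavelet (BayesShrink) denoiser as a stand-in for curvelet thresholding (curvelet transforms are not part of the standard scientific Python stack) and scikit-image's Chan-Vese active contour in place of the paper's level-set step. The file name and parameters are assumptions, not the authors' settings.

```python
# Minimal sketch of the denoise-then-segment idea, assuming a wavelet denoiser
# in place of curvelet thresholding and a Chan-Vese active contour for the
# level-set segmentation. The input file name is hypothetical.
from skimage import io, img_as_float
from skimage.restoration import denoise_wavelet
from skimage.segmentation import chan_vese

image = img_as_float(io.imread("cca_bmode.png", as_gray=True))  # hypothetical file

# Transform-domain soft thresholding to suppress speckle while keeping edges
denoised = denoise_wavelet(image, method="BayesShrink", mode="soft",
                           rescale_sigma=True)

# Region-based active contour (no seed points required); returns a binary mask
mask = chan_vese(denoised, mu=0.25)
print("segmented pixels:", int(mask.sum()))
```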

Keywords: curvelet, decomposition, levelset, ultrasound

Procedia PDF Downloads 309
13480 International Financial Reporting Standards and the Quality of Banks Financial Statement Information: Evidence from an Emerging Market-Nigeria

Authors: Ugbede Onalo, Mohd Lizam, Ahmad Kaseri, Otache Innocent

Abstract:

Given the paucity of studies on IFRS adoption and the quality of banks' accounting information, particularly in emerging economies, this study investigates whether Nigeria's decision to adopt IFRS from 1 January 2012 is associated with higher-quality accounting measures. Consistent with prior literature, the study measures the quality of financial statement information using earnings management, timeliness of loss recognition, and value relevance. A total of twenty Nigerian banks covering a period of six years (2008-2013), divided equally into a pre-adoption period (2008, 2009, 2010) and a post-adoption period (2011, 2012, 2013), were investigated. Following prior studies, eight models in all were employed to investigate earnings management, timeliness of loss recognition, and the value relevance of Nigerian banks' accounting quality under the different reporting regimes. Results suggest that IFRS adoption is associated with minimal earnings management, timely recognition of losses, and high value relevance of accounting information. In summary, IFRS adoption engenders higher-quality bank financial statement information compared with local GAAP. Hence, this study recommends the global adoption of IFRS and that Nigerian banks embrace good corporate governance practices.
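
Since the keywords point to the Jones model and discretionary accruals, the following sketch shows how discretionary accruals are typically estimated with the Jones (1991) specification via ordinary least squares. The data arrays are placeholders, not the study's sample.

```python
# Sketch: discretionary accruals via the Jones (1991) model, one of the
# earnings-management measures referenced above. All data below are placeholders.
import numpy as np

# Per bank-year: total accruals TA_t, lagged total assets A_{t-1},
# change in revenues dREV_t, gross property, plant & equipment PPE_t.
TA    = np.array([120.0, -80.0, 45.0, 60.0, -15.0])
A_lag = np.array([4000.0, 3500.0, 5000.0, 4200.0, 3900.0])
dREV  = np.array([300.0, -120.0, 210.0, 90.0, 40.0])
PPE   = np.array([1500.0, 1400.0, 1800.0, 1600.0, 1550.0])

# Jones model: TA/A_{t-1} = a*(1/A_{t-1}) + b1*(dREV/A_{t-1}) + b2*(PPE/A_{t-1}) + e
y = TA / A_lag
X = np.column_stack([1.0 / A_lag, dREV / A_lag, PPE / A_lag])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

nondiscretionary = X @ coef             # fitted ("normal") accruals
discretionary = y - nondiscretionary    # residuals proxy for earnings management
print(np.round(discretionary, 4))
```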

Keywords: IFRS, SAS, quality of accounting information, earnings measurement, discretionary accruals, non-discretionary accruals, total accruals, Jones model, timeliness of loss recognition, value relevance

Procedia PDF Downloads 433
13479 Determinants of Customer Satisfaction: The Case of Abyssinia Bank Customers in Addis Ababa, Ethiopia

Authors: Yosef Ferede Bogale

Abstract:

The purpose of this study was to evaluate the degree of customer satisfaction and the variables influencing it in the case of the Bank of Abyssinia branches in the Arada and Bole districts of Addis Ababa. The study was carried out in Addis Ababa, the capital city of Ethiopia, using a mixed research approach and a descriptive and explanatory research design. Both primary and secondary data were employed. The study's target population consisted of 1000 of the bank's most prestigious clients. With a 93% response rate, the 265 respondents came from both genders, were in the active age group, and had higher levels of education and work experience. This investigation found that the degree of client satisfaction was rated as medium. The company's image practices, employee competency, technology, and service quality were also given a middling rating. Further, the results demonstrate that corporate image, employee competency, technology, and service quality all positively and significantly affect customer satisfaction. The study found that, to varying degrees, company image, technology, competence, and high-quality financial services will all improve customer satisfaction. Accordingly, the report recommends that banks monitor customer satisfaction and service quality at least twice a year, because there is a growing movement among bank service providers toward accountability, and measuring these factors is crucial. The study also recommends that banks make every effort to meet customers' expectations to the highest possible level.

Keywords: customer satisfaction, corporate image, quality services risk, bank

Procedia PDF Downloads 26
13478 Image Classification with Localization Using Convolutional Neural Networks

Authors: Bhuyain Mobarok Hossain

Abstract:

Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNNs) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image within one sliding window. Most localization networks rely on a bounding box around the area of interest. In contrast to such architectures, we treat the problem as a classification problem in which each pixel of the image is a separate section. Image classification is the task of predicting a single category for, or assigning a label to, a group of data points. It is part of the wider classification problem and assigns labels to the image as a whole: an image can be classified as a day or a night shot, or, likewise, images of cars and motorbikes can automatically be placed in their respective collections. Deep learning models for image classification generally include convolutional layers; such a network is referred to as a convolutional neural network (CNN).
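
To make the sliding-window idea concrete, here is a hedged PyTorch sketch: a small classifier CNN is evaluated at multiple window positions to build a coarse localization map. The architecture, window size, and stride are illustrative assumptions, not the authors' network.

```python
# Illustrative sketch (not the authors' network): a small CNN classifier applied
# at multiple window positions of one image to obtain a coarse localization map.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 windows

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN().eval()
image = torch.rand(3, 256, 256)          # placeholder input image
win, stride = 64, 32                      # assumed window size and stride

best_score, best_pos = -1.0, None
with torch.no_grad():
    for top in range(0, 256 - win + 1, stride):
        for left in range(0, 256 - win + 1, stride):
            patch = image[:, top:top + win, left:left + win].unsqueeze(0)
            prob = torch.softmax(model(patch), dim=1)[0, 1].item()  # object score
            if prob > best_score:
                best_score, best_pos = prob, (top, left)

print("highest object score", round(best_score, 3), "at window", best_pos)
```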

Keywords: image classification, object detection, localization, particle filter

Procedia PDF Downloads 266
13477 'Low Electronic Noise' Detector Technology in Computed Tomography

Authors: A. Ikhlef

Abstract:

Image noise in computed tomography is mainly caused by statistical noise, system noise, and the reconstruction algorithm and its filters. Over the last few years, low-dose x-ray imaging has become more and more desirable and is seen as a technically differentiating capability among CT manufacturers. In order to achieve this goal, several technologies and techniques are being investigated, including both hardware (integrated electronics and photon counting) and software (artificial intelligence and machine learning) based solutions. From a hardware point of view, electronic noise could indeed be a potential driver for low and ultra-low dose imaging. We demonstrated that the reduction or elimination of this term could lead to a reduction of dose without affecting image quality. In this study, we also show that this goal can be achieved using conventional electronics (a low-cost and affordable technology), designed carefully and optimized for maximum detective quantum efficiency. We conducted the tests using large imaging objects, such as 30 cm water and 43 cm polyethylene phantoms, and compared the image quality with conventional imaging protocols at exposures as low as 10 mAs (<< 1 mGy). Clinical validation of these results has been performed as well.

Keywords: computed tomography, electronic noise, scintillation detector, x-ray detector

Procedia PDF Downloads 96
13476 Detecting the Edge of Multiple Images in Parallel

Authors: Prakash K. Aithal, U. Dinesh Acharya, Rajesh Gopakumar

Abstract:

An edge is a variation of brightness in an image. Edge detection is useful in many application areas, such as finding forests and rivers in a satellite image or detecting a broken bone in a medical image. This paper discusses finding the edges of multiple aerial images in parallel. The proposed work was tested on 38 images: 37 colour images and one monochrome image. The time taken to process N images in parallel is equivalent to the time taken to process one image sequentially. The proposed method achieves pixel-level parallelism as well as image-level parallelism.
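
The image-level parallelism described above can be sketched as follows: one worker per image, each computing a Sobel edge magnitude. The paper itself uses OpenCL/MPI; Python multiprocessing is used here only to show the idea, and the file names are hypothetical.

```python
# Sketch of image-level parallelism: one worker per image, Sobel edge magnitude.
# The paper uses OpenCL/MPI; multiprocessing is used here only to show the idea.
from multiprocessing import Pool

from skimage import io, color, filters

def edge_map(path):
    img = io.imread(path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img
    return filters.sobel(gray)               # gradient-magnitude edge image

if __name__ == "__main__":
    paths = [f"aerial_{i:02d}.png" for i in range(38)]   # hypothetical file names
    with Pool() as pool:
        edges = pool.map(edge_map, paths)    # N images processed concurrently
    print(len(edges), "edge maps computed")
```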

Keywords: edge detection, multicore, gpu, opencl, mpi

Procedia PDF Downloads 444
13475 New Method to Increase the Contrast of Electron Micrographs of Rat Tissue Sections

Authors: Lise Paule Labéjof, Raíza Sales Pereira Bizerra, Galileu Barbosa Costa, Thaísa Barros dos Santos

Abstract:

Since the beginning of microscopy, improving image quality has always been a concern of its users. For transmission electron microscopy (TEM) in particular, the problem is even more important because of the complexity of the sample preparation technique and the many variables that can affect the preservation of structures, the proper operation of the equipment used, and hence the quality of the images obtained. Because animal tissues are transparent, a contrast agent must be applied in order to identify the elements of their ultrastructural morphology. Several methods of contrasting tissues for TEM imaging have already been developed; the most widely used are "in block" and "in situ" contrasting. This report presents an alternative technique in which the contrast agent is applied in vivo, i.e., before sampling. With this new method, the electron micrographs of the tissue sections show better contrast than those contrasted in situ and present no artefacts from precipitation of the contrast agent. Another advantage is that only a small amount of contrast agent is needed to obtain a good result, which matters given that most contrast agents are expensive and extremely toxic.

Keywords: image quality, microscopy research, staining technique, ultra thin section

Procedia PDF Downloads 402
13474 Speeding-up Gray-Scale FIC by Moments

Authors: Eman A. Al-Hilo, Hawraa H. Al-Waelly

Abstract:

In this work, a fractal image compression (FIC) technique is introduced that uses moment features to index the zero-mean range and domain blocks. The moment features are used to speed up the IFS-matching stage: a moments-ratio descriptor filters the domain blocks and keeps only those suitable for IFS matching with the tested range block. Tests conducted on the Lena and Cat images (256 pixels, 24 bits per pixel) showed a minimum encoding time (0.89 s for Lena and 0.78 s for Cat) with acceptable PSNR (30.01 dB for Lena and 29.8 dB for Cat). The reduction in encoding time is about 12% for Lena and 67% for Cat.
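
The following sketch shows the general idea of pre-filtering domain blocks with a moment-based descriptor before IFS matching. The descriptor below is an illustrative stand-in and not the authors' exact moments-ratio definition; block sizes and the tolerance are assumptions.

```python
# Sketch of moment-based pre-filtering of zero-mean domain blocks before IFS
# matching. The descriptor is an illustrative stand-in, not the paper's exact one.
import numpy as np

def zero_mean(block):
    return block - block.mean()

def moment_descriptor(block, eps=1e-9):
    z = zero_mean(block).ravel()
    m2 = np.mean(z ** 2)              # second central moment (variance)
    m3 = np.mean(np.abs(z) ** 3)      # third absolute central moment
    return m3 / (m2 ** 1.5 + eps)     # scale-invariant ratio

def candidate_domains(range_block, domain_blocks, tol=0.2):
    """Keep only domain blocks whose descriptor is close to the range block's."""
    d_r = moment_descriptor(range_block)
    return [d for d in domain_blocks
            if abs(moment_descriptor(d) - d_r) < tol * abs(d_r)]

rng = np.random.default_rng(0)
range_block = rng.random((8, 8))
domain_blocks = [rng.random((8, 8)) for _ in range(100)]
print(len(candidate_domains(range_block, domain_blocks)), "domains kept for matching")
```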

Keywords: fractal gray level image, fractal compression technique, iterated function system, moments feature, zero-mean range-domain block

Procedia PDF Downloads 469
13473 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information

Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu

Abstract:

In semantic communication, existing pilot-free joint source-channel coding (JSCC) wireless communication systems have unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free joint source-channel image transmission system based on multi-level semantic information (multi-level JSCC) is proposed. The transmitter is composed of two networks. A feature-extraction network extracts the high-level semantic features of the image, compressing the transmitted information and improving bandwidth utilization. A feature-retention network preserves low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are fused, in the same dimension, with the low-level semantic features after a feature-enhancement network; the image dimensions are then restored through a feature-recovery network, and the image location information is used effectively for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information over both AWGN and Rayleigh fading channels, and the peak signal-to-noise ratio (PSNR) is improved by 1-2 dB compared with other algorithms under the same simulation conditions.
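
For readers unfamiliar with the JSCC setup, here is a compact, hedged PyTorch sketch of an encoder / AWGN channel / decoder autoencoder. It shows the general structure only; the paper's multi-level feature extraction and fusion is not reproduced, and all layer sizes and the SNR value are illustrative assumptions.

```python
# Compact sketch of a JSCC-style autoencoder with an AWGN channel layer.
# General structure only; the paper's multi-level feature fusion is omitted.
import torch
import torch.nn as nn

class AWGN(nn.Module):
    def __init__(self, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db
    def forward(self, x):
        power = x.pow(2).mean()
        noise_power = power / (10 ** (self.snr_db / 10))
        return x + torch.randn_like(x) * noise_power.sqrt()

class JSCC(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(32, 16, 5, stride=2, padding=2),          # channel symbols
        )
        self.channel = AWGN(snr_db=10.0)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, 5, stride=2, padding=2, output_padding=1),
            nn.PReLU(),
            nn.ConvTranspose2d(32, 3, 5, stride=2, padding=2, output_padding=1),
            nn.Sigmoid(),
        )
    def forward(self, img):
        return self.decoder(self.channel(self.encoder(img)))

model = JSCC()
x = torch.rand(1, 3, 64, 64)                  # placeholder image batch
recon = model(x)
mse = nn.functional.mse_loss(recon, x)
psnr = 10 * torch.log10(1.0 / mse)            # PSNR of the untrained model
print(recon.shape, float(psnr))
```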

Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness

Procedia PDF Downloads 88
13472 Assessment of Kinetic Trajectory of the Median Nerve from Wrist Ultrasound Images Using Two Dimensional Bayesian Speckle Tracking Technique

Authors: Li-Kai Kuo, Shyh-Hau Wang

Abstract:

The kinetic trajectory of the median nerve (MN) in the wrist has been shown to be applicable to assessing carpal tunnel syndrome (CTS) and can be detected from high-frequency ultrasound images via motion-tracking techniques. However, a previous study could not perform the measurement quickly because a single-element transducer was used for ultrasound image scanning, so that system is not appropriate for clinical application. In the present study, B-mode ultrasound images of the wrist corresponding to finger movements from flexion to extension were acquired with a clinically applicable real-time scanner. The kinetic trajectories of the MN were estimated off-line using a two-dimensional Bayesian speckle tracking (TDBST) technique. The experiments were carried out on ten volunteers with an ultrasound scanner operating at 12 MHz. Results verified in phantom experiments demonstrated that the TDBST technique is able to detect the movement of the MN based on past and present signal information and thereby reduce the computational complications associated with image-quality effects such as resolution and contrast variations. Moreover, the TDBST technique tended to be more accurate in detecting movements of the MN in the wrist than the normalized cross-correlation tracking (NCCT) technique used in the previous study. In response to finger flexion, the kinetic trajectory of the MN moved toward the ulnar-palmar direction, and then toward the radial-dorsal direction during extension. The TDBST technique and the employed ultrasound scanner were verified to be feasible for sensitively detecting the kinetic trajectory and displacement of the MN. They could thus be further applied to diagnose CTS clinically and to improve measurements for assessing the 3D trajectory of the MN.
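
As context for the comparison above, the following sketch implements the normalized cross-correlation (NCCT) baseline, i.e. block matching between two frames; the Bayesian speckle tracker itself is not reproduced. Frames, block size, and search range are synthetic placeholders.

```python
# Sketch of the normalized cross-correlation (NCCT) baseline mentioned above;
# the Bayesian speckle tracker itself is not reproduced. Frames are synthetic.
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return (a * b).sum() / denom

def track_block(prev, curr, top, left, size=16, search=8):
    """Find the displacement of a speckle block between two frames."""
    template = prev[top:top + size, left:left + size]
    best, best_dy, best_dx = -2.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue
            score = ncc(template, curr[y:y + size, x:x + size])
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best

rng = np.random.default_rng(1)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=(3, -2), axis=(0, 1))     # known motion of (3, -2)
print(track_block(frame0, frame1, top=40, left=40))      # expected (3, -2, ~1.0)
```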

Keywords: Bayesian speckle tracking, carpal tunnel syndrome, median nerve, motion tracking

Procedia PDF Downloads 465
13471 Digital Image Forensics: Discovering the History of Digital Images

Authors: Gurinder Singh, Kulbir Singh

Abstract:

Digital multimedia content such as images, video, and audio can be tampered with easily owing to the availability of powerful editing software. Multimedia forensics is devoted to analyzing such content using various digital forensic techniques in order to validate its authenticity. Digital image forensics is dedicated to investigating the reliability of digital images by analyzing the integrity of the data and by reconstructing the historical information of an image related to its acquisition phase. In this paper, a survey of forgery detection is carried out, considering the most recent and promising digital image forensic techniques.

Keywords: computer forensics, multimedia forensics, image ballistics, camera source identification, forgery detection

Procedia PDF Downloads 213
13470 Performance Evaluation of a Very High-Resolution Satellite Telescope

Authors: Walid A. Attia, Taher M. Bazan, Fawzy Eltohamy, Mahmoud Fathy

Abstract:

System performance evaluation is an essential stage in the design of high-resolution satellite telescopes prior to the development process. In this paper, a system performance evaluation of a very high-resolution satellite telescope is investigated. The evaluated system has a Korsch optical design, which has been discussed in another paper in comparison with a three-mirror anastigmat (TMA) design and showed better results. The investigated system is based on the Korsch optical design integrated with a time-delay and integration charge-coupled device (TDI-CCD) sensor to achieve a ground sampling distance (GSD) of 25 cm. The key performance metrics considered are the spatial resolution, the signal-to-noise ratio (SNR), and the total modulation transfer function (MTF) of the system. In addition, the national imagery interpretability rating scale (NIIRS) metric is assessed to predict the image quality according to a modified general image quality equation (GIQE). Based on the orbital, optical, and detector parameters, the estimated GSD is found to be 25 cm. The SNR has been analyzed for different illumination conditions of target albedo, sun angle, and sensor angle. The system MTF has been computed including diffraction, aberrations, optical manufacturing, smear, and detector sampling as the main contributors. Finally, the evaluation results show that the computed MTF value is around 0.08 at the Nyquist frequency, the SNR is 130 at an albedo of 0.2 with nadir viewing angles, and the predicted NIIRS is on the order of 6.5, which implies a very good system image quality.
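
For orientation, the sketch below evaluates one commonly cited form of GIQE version 4. The paper uses a modified GIQE, so the coefficients and the example inputs (RER, edge overshoot H, noise gain G) here are assumptions for illustration only and will not reproduce the 6.5 value reported above.

```python
# Indicative NIIRS prediction using one commonly cited form of GIQE version 4.
# The paper uses a *modified* GIQE; coefficients and inputs here are illustrative.
import math

def niirs_giqe4(gsd_m, rer, overshoot_h, gain_g, snr):
    gsd_in = gsd_m / 0.0254                      # GIQE-4 expects GSD in inches
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * overshoot_h - 0.344 * gain_g / snr)

# Numbers in the neighbourhood of the abstract (GSD = 0.25 m, SNR = 130);
# RER, H and G are assumed values, not taken from the paper.
print(round(niirs_giqe4(gsd_m=0.25, rer=0.5, overshoot_h=1.0, gain_g=1.0, snr=130), 2))
```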

Keywords: modulation transfer function, national imagery interpretability rating scale, signal to noise ratio, satellite telescope performance evaluation

Procedia PDF Downloads 351
13469 Gray Level Image Encryption

Authors: Roza Afarin, Saeed Mozaffari

Abstract:

The aim of this paper is image encryption using a genetic algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce the correlation among adjacent pixels. Pixel values are then changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, the binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase the binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population in the GA consists of the rows and columns of the input image. Instead of subjective selection of parents from this initial population, a random generator with a predefined key is utilized, which makes it possible to decrypt the coded image and reconstruct the initial input image. The fitness function is defined as the average number of 0-to-1 transitions in the LBP image for the modification phase and as histogram uniformity for the diffusion phase. The randomness of the encrypted image is measured by entropy, correlation coefficients, and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption.
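
The evaluation metrics named at the end of the abstract can be computed as in the short sketch below (entropy and adjacent-pixel correlation); the GA-based encryption itself is not reproduced, and the test image is a random placeholder.

```python
# Sketch of the randomness metrics used above (entropy and adjacent-pixel
# correlation); the GA-based encryption itself is not reproduced here.
import numpy as np

def entropy(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())        # 8 bits/pixel is the ideal value

def horizontal_correlation(img):
    x = img[:, :-1].ravel().astype(float)
    y = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(x, y)[0, 1])         # near 0 for a good cipher image

rng = np.random.default_rng(0)
cipher_like = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # placeholder
print(entropy(cipher_like), horizontal_correlation(cipher_like))
```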

Keywords: correlation coefficients, genetic algorithm, image encryption, image entropy

Procedia PDF Downloads 302
13468 A Gauge Repeatability and Reproducibility Study for Multivariate Measurement Systems

Authors: Jeh-Nan Pan, Chung-I Li

Abstract:

Measurement system analysis (MSA) plays an important role in helping organizations improve their product quality. Generally speaking, the gauge repeatability and reproducibility (GRR) study is performed according to the MSA handbook stated in the QS9000 standards. Usually, a GRR study for assessing the adequacy of gauge variation needs to be conducted prior to process capability analysis. Traditional MSA considers only a single quality characteristic. With the advent of modern technology, industrial products have become very sophisticated, with more than one quality characteristic. Thus, it becomes necessary to perform multivariate GRR analysis for a measurement system when collecting data with multiple responses. In this paper, we take the correlation coefficients among tolerances into account to revise the multivariate precision-to-tolerance (P/T) ratio proposed by Majeske (2008). We then compare the performance of our revised P/T ratio with that of the existing ratios. The simulation results show that our revised P/T ratio outperforms the others in terms of robustness and proximity to the actual value. Moreover, the optimal allocation of several parameters, such as the number of quality characteristics (v), the sample size of parts (p), the number of operators (o), and the number of replicate measurements (r), is discussed using the confidence interval of the revised P/T ratio. Finally, a standard operating procedure (S.O.P.) for performing the GRR study for multivariate measurement systems is proposed based on the research results. Hopefully, it can serve as a useful reference for quality practitioners when conducting such studies in industry.
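
As background, here is a minimal sketch of the univariate precision-to-tolerance ratio that the multivariate version generalizes. The revised multivariate ratio with tolerance correlations proposed in the paper is not reproduced; the 6-sigma convention and the variance components are assumptions (some handbooks use 5.15 instead of 6).

```python
# Sketch of the univariate precision-to-tolerance ratio underlying the GRR study.
# The paper's revised multivariate ratio (with tolerance correlations) is not
# reproduced; the 6-sigma convention and example numbers are assumptions.
import math

def pt_ratio(var_repeatability, var_reproducibility, lsl, usl, k=6.0):
    sigma_grr = math.sqrt(var_repeatability + var_reproducibility)
    return k * sigma_grr / (usl - lsl)

# Illustrative variance components for one quality characteristic
print(round(pt_ratio(var_repeatability=4e-6, var_reproducibility=1e-6,
                     lsl=9.90, usl=10.10), 3))   # <= 0.1 is commonly deemed acceptable
```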

Keywords: gauge repeatability and reproducibility, multivariate measurement system analysis, precision-to-tolerance ratio

Procedia PDF Downloads 224
13467 Image Segmentation Techniques: Review

Authors: Lindani Mbatha, Suvendi Rimer, Mpho Gololo

Abstract:

Image segmentation is the process of dividing an image into several sections, such as the background and the foreground objects. It is a critical technique in both image-processing tasks and computer vision. Most image segmentation algorithms have been developed for gray-scale images, and comparatively little research and few algorithms exist for colour images. Most image segmentation algorithms or techniques vary depending on the input data and the application, and nearly all of them are unsuited to noisy environments. Much of the work that has been done uses the Markov Random Field (MRF), which is computationally demanding but is said to be robust to noise. In recent years, image segmentation has been applied to problems such as easier processing of an image, interpretation of the contents of an image, and easier analysis of an image. This article reviews and summarizes some of the image segmentation techniques and algorithms that have been developed over the past years. The techniques include convolutional neural networks (CNNs), edge-based techniques, region growing, clustering, thresholding techniques, and so on. The advantages and disadvantages of medical ultrasound image segmentation techniques are also discussed. The article also addresses the applications and potential future developments around image segmentation. This review concludes that no single technique is perfectly suitable for segmenting all different types of images, but the use of hybrid techniques yields more accurate and efficient results.

Keywords: clustering-based, convolution-network, edge-based, region-growing

Procedia PDF Downloads 55
13466 A Machine Learning Approach for the Leakage Classification in the Hydraulic Final Test

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

The widespread use of machine learning applications in production is significantly accelerated by improved computing power and increasing data availability. Predictive quality enables the assurance of product quality by using machine learning models as a basis for decisions on test results. The use of real Bosch production data based on geometric gauge blocks from machining, mating data from assembly and hydraulic measurement data from final testing of directional valves is a promising approach to classifying the quality characteristics of workpieces.

Keywords: machine learning, classification, predictive quality, hydraulics, supervised learning

Procedia PDF Downloads 162
13465 A Simple and Efficient Method for Accurate Measurement and Control of Power Frequency Deviation

Authors: S. J. Arif

Abstract:

In the presented technique, a simple method is given for accurate measurement and control of power frequency deviation. The sinusoidal signal for which the frequency deviation measurement is required is transformed to a low voltage level and passed through a zero-crossing detector to convert it into a pulse train. A second, stable square-wave signal of 10 kHz is obtained using a crystal oscillator and decade dividing assemblies (DDAs). These signals are combined digitally and then passed through decade counters to give a unique combination of pulses or levels, which are further encoded to make them equally suitable for both control applications and display units. The developed circuit, using discrete components, has a resolution of 0.5 Hz and completes a measurement within 20 ms. The realized circuit is simulated and synthesized using Verilog HDL and subsequently implemented on an FPGA. The results of measurement on the FPGA are observed on a very high resolution logic analyzer and accurately match the simulation results as well as the results of the same circuit implemented with discrete components. The proposed system is suitable for accurate measurement and control of power frequency deviation.
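
A software analogue of the measurement principle is sketched below: the frequency of the sinusoid is estimated from its zero crossings and reported as a deviation from a 50 Hz nominal. This only illustrates the idea; the paper's counter/encoder hardware differs, and the sampling rate and test frequency are assumptions.

```python
# Software analogue of the zero-crossing measurement principle described above.
# The 10 kHz sampling rate stands in for the 10 kHz reference; values are assumed.
import numpy as np

fs = 10_000                                      # sampling rate (Hz)
nominal = 50.0
t = np.arange(0, 0.2, 1 / fs)
signal = np.sin(2 * np.pi * 50.4 * t)            # test signal at 50.4 Hz

# Rising zero crossings, with linear interpolation between samples
idx = np.where((signal[:-1] < 0) & (signal[1:] >= 0))[0]
crossings = idx + (-signal[idx]) / (signal[idx + 1] - signal[idx])
periods = np.diff(crossings) / fs                # period estimates in seconds
measured = 1.0 / periods.mean()

print(f"measured = {measured:.2f} Hz, deviation = {measured - nominal:+.2f} Hz")
```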

Keywords: digital encoder for frequency measurement, frequency deviation measurement, measurement and control systems, power systems

Procedia PDF Downloads 347
13464 Improvement of Image Summarization Using Image Processing and the Particle Swarm Optimization Algorithm

Authors: Hooman Torabifard

Abstract:

In the last few years, with the progress of technology and computers and the entry of artificial intelligence into all kinds of scientific and industrial fields, human lifestyles have changed and, in general, the way humans live has seen many changes and developments. Some of these changes have occurred in the context of digital images and image processing, and they are still continuing. Alongside all the benefits, however, there have been disadvantages, one of which is the sheer number of images and the high volume of data they represent; the focus of this paper is on improving and developing a method for summarizing these images and enhancing their usefulness. The general method used for this purpose consists of a set of techniques based on data obtained from image processing, combined with the particle swarm optimization (PSO) algorithm. In the remainder of this paper, the method used is elaborated in detail.

Keywords: image summarization, particle swarm optimization, image threshold, image processing

Procedia PDF Downloads 102
13463 Improved Super-Resolution Using Deep Denoising Convolutional Neural Network

Authors: Pawan Kumar Mishra, Ganesh Singh Bisht

Abstract:

Super-resolution is a technique used in computer vision to construct a high-resolution image from a single low-resolution image. It is used to increase the frequency content, recover lost details, and remove the downsampling and noise introduced by the camera during the image acquisition process. High-resolution images and videos are a desired part of all image processing tasks and of their analysis in most digital imaging applications. The goal of super-resolution is to combine non-redundant information from single or multiple low-resolution frames to generate a high-resolution image. Many methods have been proposed in which multiple low-resolution images of the same scene, with different transformations, are used; this is called multi-image super-resolution. Another family of methods is single-image super-resolution, which tries to learn the redundancy present in an image and reconstruct the lost information from a single low-resolution image. The use of deep learning is currently one of the state-of-the-art approaches for reconstructing a high-resolution image. In this research, we propose Deep Denoising Super Resolution (DDSR), a deep neural network for effectively reconstructing a high-resolution image from a low-resolution image.
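
To illustrate the single-image, learn-the-mapping idea described above, here is a hedged PyTorch sketch of a small SRCNN-style network with sub-pixel upsampling. It is not the proposed DDSR architecture (which the abstract does not specify); layer sizes and the scale factor are assumptions.

```python
# Sketch of a small SRCNN-style network for single-image super-resolution.
# Not the proposed DDSR architecture; it only illustrates the mapping idea.
import torch
import torch.nn as nn

class SmallSR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(),   # feature extraction
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),  # non-linear mapping
            nn.Conv2d(32, scale * scale, 5, padding=2),  # reconstruction
            nn.PixelShuffle(scale),                      # sub-pixel upsampling
        )
    def forward(self, x):
        return self.body(x)

model = SmallSR(scale=2)
low_res = torch.rand(1, 1, 32, 32)       # placeholder low-resolution patch
high_res = model(low_res)
print(high_res.shape)                     # torch.Size([1, 1, 64, 64])
```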

Keywords: resolution, deep-learning, neural network, de-blurring

Procedia PDF Downloads 484
13462 Abdominal Organ Segmentation in CT Images Based On Watershed Transform and Mosaic Image

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Accurate liver, spleen, and kidney segmentation in abdominal CT images is one of the most important steps in computer-aided diagnosis of abdominal organ pathology. In this paper, we propose a new semi-automatic algorithm for liver, spleen, and kidney area extraction in abdominal CT images. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a technique has been designed to suppress over-segmentation based on the mosaic image and on the computation of the watershed transform. The algorithm is carried out in two parts. In the first, we seek to improve the quality of the gradient-mosaic image: we propose a method for improving the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen, and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work.
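
The core watershed step can be sketched as follows. A total-variation denoiser stands in for the anisotropic-diffusion and morphological pre-filtering, the markers are simple intensity thresholds, and the organ-specific hierarchical/mosaic stage is omitted; the input file name and thresholds are assumptions.

```python
# Sketch of a marker-controlled watershed on a gradient image, assuming a
# total-variation denoiser in place of the paper's anisotropic diffusion and
# morphological filters. The file name and thresholds are hypothetical.
import numpy as np
from skimage import io, img_as_float, filters
from skimage.restoration import denoise_tv_chambolle
from skimage.segmentation import watershed

ct_slice = img_as_float(io.imread("abdomen_ct_slice.png", as_gray=True))

smoothed = denoise_tv_chambolle(ct_slice, weight=0.1)   # edge-preserving smoothing
gradient = filters.sobel(smoothed)                      # gradient image for watershed

# Simple internal/external markers from intensity; a mosaic/hierarchical scheme
# would normally be used here to avoid over-segmentation.
markers = np.zeros_like(ct_slice, dtype=np.int32)
markers[smoothed < 0.2] = 1          # background seed
markers[smoothed > 0.6] = 2          # bright organ/tissue seed

labels = watershed(gradient, markers)
print(np.unique(labels))
```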

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, multi-abdominal organ segmentation, watershed algorithm

Procedia PDF Downloads 464
13461 Highly Secure Data Hiding Using Image Cropping and Least Significant Bit Steganography

Authors: Khalid A. Al-Afandy, El-Sayyed El-Rabaie, Osama Salah, Ahmed El-Mhalaway

Abstract:

This paper presents a highly secure data hiding technique using image cropping and least significant bit (LSB) steganography. Crops at certain predefined secret coordinates are extracted from the cover image. The secret text message is divided into sections, the number of sections being equal to the number of image crops. Each section of the secret text message is embedded into an image crop in a secret sequence using the LSB technique, with the embedding performed in the cover image's colour channels. The stego image is obtained by reassembling the image and the stego crops. The results of the technique are compared with other state-of-the-art techniques. Evaluation is based on visual inspection for any degradation of the stego image, the difficulty of extracting the embedded data for an unauthorized viewer, the peak signal-to-noise ratio (PSNR) of the stego image, and the CPU time of the embedding algorithm. Experimental results show that the proposed technique is more secure than the other traditional techniques.
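
A minimal LSB embed/extract on a single channel is sketched below to show the basic operation; the paper's cropping, secret coordinates, and reassembly steps are not reproduced, and the cover data is a random placeholder.

```python
# Minimal LSB embed/extract sketch for one colour channel. The paper's cropping,
# secret coordinates and reassembly steps are not reproduced here.
import numpy as np

def embed_lsb(channel, message):
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = channel.flatten()                               # returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits    # overwrite the LSBs
    return flat.reshape(channel.shape)

def extract_lsb(channel, n_chars):
    bits = channel.flatten()[:n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder channel
stego = embed_lsb(cover, "secret")
print(extract_lsb(stego, 6))                                 # -> "secret"
print(np.abs(stego.astype(int) - cover.astype(int)).max())   # distortion of at most 1
```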

Keywords: steganography, stego, LSB, crop

Procedia PDF Downloads 240
13460 Assessing Image Quality in Mobile Radiography: A Phantom-Based Evaluation of a New Lightweight Mobile X-Ray Equipment

Authors: May Bazzi, Shafik Tokmaj, Younes Saberi, Mats Geijer, Tony Jurkiewicz, Patrik Sund, Anna Bjällmark

Abstract:

Mobile radiography, employing portable X-ray equipment, has become a routine procedure within hospital settings, with chest X-rays in intensive care units standing out as the most prevalent mobile X-ray examinations. This approach is not limited to hospitals alone, as it extends its benefits to imaging patients in various settings, particularly those too frail to be transported, such as elderly care residents in nursing homes. Moreover, the utility of mobile X-ray isn't confined solely to traditional healthcare recipients; it has proven to be a valuable resource for vulnerable populations, including the homeless, drug users, asylum seekers, and patients with multiple co-morbidities. Mobile X-rays reduce patient stress, minimize costly hospitalizations, and offer cost-effective imaging. While studies confirm its reliability, further research is needed, especially regarding image quality. Recent advancements in lightweight equipment with enhanced battery and detector technology provide the potential for nearly handheld radiography. The main aim of this study was to evaluate a new lightweight mobile X-ray system with two different detectors and compare the image quality with a modern stationary system. Methods: A total of 74 images of the chest (chest anterior-posterior (AP) views and chest lateral views) and pelvic/hip region (AP pelvis views, hip AP views, and hip cross-table lateral views) were acquired on a whole-body phantom (Kyotokagaku, Japan), utilizing varying image parameters. These images were obtained using a stationary system - 18 images (Mediel, Sweden), a mobile X-ray system with a second-generation detector - 28 images (FDR D-EVO II; Fujifilm, Japan) and a mobile X-ray system with a third-generation detector - 28 images (FDR D-EVO III; Fujifilm, Japan). Image quality was assessed by visual grading analysis (VGA), which is a method to measure image quality by assessing the visibility and accurate reproduction of anatomical structures within the images. A total of 33 image criteria were used in the analysis. A panel of two experienced radiologists, two experienced radiographers, and two final-term radiographer students evaluated the image quality on a 5-grade ordinal scale using the software Viewdex 3.0 (Viewer for Digital Evaluation of X-ray images, Sweden). Data were analyzed using visual grading characteristics analysis. The dose was measured by the dose-area product (DAP) reported by the respective systems. Results: The mobile X-ray equipment (both detectors) showed significantly better image quality than the stationary equipment for the pelvis, hip AP and hip cross-table lateral images with AUCVGA-values ranging from 0.64-0.92, while chest images showed mixed results. The number of images rated as having sufficient quality for diagnostic use was significantly higher for mobile X-ray generation 2 and 3 compared with the stationary X-ray system. The DAP values were higher for the stationary compared to the mobile system. Conclusions: The new lightweight radiographic equipment had an image quality at least as good as a fixed system at a lower radiation dose. Future studies should focus on clinical images and consider radiographers' viewpoints for a comprehensive assessment.

Keywords: mobile x-ray, visual grading analysis, radiographer, radiation dose

Procedia PDF Downloads 31
13459 Secure E-Pay System Using Steganography and Visual Cryptography

Authors: K. Suganya Devi, P. Srinivasan, M. P. Vaishnave, G. Arutperumjothi

Abstract:

Today’s internet world is highly prone to various online attacks, of which the most harmful is phishing. Attackers host fake websites that closely resemble legitimate ones. We propose an image-based authentication scheme using steganography and visual cryptography to prevent phishing. This paper presents a secure steganographic technique for true-colour (RGB) images and uses the discrete cosine transform to compress the images. The proposed method hides the secret data inside the cover image. Visual cryptography is used to preserve the privacy of an image by decomposing the original image into two shares. The original image can be identified only when both qualified shares are simultaneously available; an individual share does not reveal the identity of the original image. Thus, the existence of the secret message is hard to detect by RS steganalysis.
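
The two-share idea can be illustrated with a random-grid (2,2) visual cryptography sketch on a binary secret image, as below; this is a generic stand-in for the paper's scheme, and the DCT-based steganography part is not reproduced.

```python
# Sketch of the two-share idea using a random-grid (2,2) visual cryptography
# scheme on a binary secret image; the DCT-based steganography is not reproduced.
import numpy as np

rng = np.random.default_rng(42)
secret = rng.integers(0, 2, size=(64, 64))       # placeholder binary secret (1 = black)

share1 = rng.integers(0, 2, size=secret.shape)
# White secret pixels: shares agree. Black secret pixels: shares are complementary,
# so stacking (logical OR of black) always yields black there.
share2 = np.where(secret == 0, share1, 1 - share1)

stacked = share1 | share2
# Each share alone looks like random noise...
print(abs(share1.mean() - 0.5) < 0.05, abs(share2.mean() - 0.5) < 0.05)
# ...but stacking recovers the secret: every black secret pixel is fully black.
print(bool((stacked[secret == 1] == 1).all()))
```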

Keywords: image security, random LSB, steganography, visual cryptography

Procedia PDF Downloads 304
13458 The Residual Effects of Special Merchandising Sections on Consumers' Shopping Behavior

Authors: Shih-Ching Wang, Mark Lang

Abstract:

This paper examines the secondary effects and consequences of special displays on subsequent shopping behavior. Special displays are studied as a prominent form of in-store or shopper marketing activity. Two experiments are performed using special value and special quality-oriented displays in an online simulated store environment. The impact of exposure to special displays on mindsets and resulting product choices are tested in a shopping task. Impact on store image is also tested. The experiments find that special displays do trigger shopping mindsets that affect product choices and shopping basket composition and value. There are intended and unintended positive and negative effects found. Special value displays improve store price image but trigger a price sensitive shopping mindset that causes more lower-priced items to be purchased, lowering total basket dollar value. Special natural food displays improve store quality image and trigger a quality-oriented mindset that causes fewer lower-priced items to be purchased, increasing total basket dollar value. These findings extend the theories of product categorization, mind-sets, and price sensitivity found in communication research into the retail store environment. Findings also warn retailers to consider the total effects and consequences of special displays when designing and executing in-store or shopper marketing activity.

Keywords: special displays, mindset, shopping behavior, price consciousness, product categorization, store image

Procedia PDF Downloads 253
13457 Challenges and Insights by Electrical Characterization of Large Area Graphene Layers

Authors: Marcus Klein, Martina GrießBach, Richard Kupke

Abstract:

Current advances in the research and manufacturing of large-area graphene layers are promising for the introduction of this exciting material into the display industry and other applications that benefit from its excellent electrical and optical characteristics. New production technologies for flexible displays, touch screens, and printed electronics apply graphene layers to non-metal substrates and bring new challenges for the required metrology. Traditional measurement concepts for layer thickness, sheet resistance, and layer uniformity are difficult to apply to graphene production processes and are often harmful to the product layer. New non-contact sensor concepts are required that can adapt to these challenges and even to the foreseeable inline production of large-area graphene. Dedicated non-contact measurement sensors are a pioneering way to address these issues in a large variety of applications, while significantly lowering the costs of development and process setup. Transferred and printed graphene layers can be characterized with high accuracy, over a huge measurement range, and at very high resolution. Large-area graphene maps are used for process optimization and for efficient quality control of transfer, doping, annealing, and stacking processes. Examples of doped, defective, and excellent graphene are presented as quality images, and the implications for manufacturers are explained.

Keywords: graphene, doping and defect testing, non-contact sheet resistance measurement, inline metrology

Procedia PDF Downloads 278
13456 Red Green Blue Image Encryption Based on Paillier Cryptographic System

Authors: Mamadou I. Wade, Henry C. Ogworonjo, Madiha Gul, Mandoye Ndoye, Mohamed Chouikha, Wayne Patterson

Abstract:

In this paper, we present a novel application of the Paillier cryptographic system to the encryption of RGB (red, green, blue) images. In this method, an RGB image is first separated into its constituent channel images, and the Paillier encryption function is applied to each channel's pixel intensity values. Next, the encrypted image is combined and compressed if necessary before being transmitted through an unsecured communication channel. The transmitted image is subsequently recovered by a decryption process. We performed a series of security and performance analyses on the recovered images in order to verify their robustness to security attacks. The results show that the proposed image encryption scheme produces highly secure encrypted images.
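
The per-pixel operation can be illustrated with a minimal Paillier encrypt/decrypt sketch. The toy primes below are for demonstration only (a real system needs large random primes), and this is not the authors' full pipeline; Python 3.8+ is assumed for the modular-inverse call.

```python
# Minimal Paillier encrypt/decrypt of a single pixel intensity, with tiny
# demonstration primes. Real deployments require large random primes.
import math
import random

p, q = 293, 433                              # toy primes for demonstration only
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)          # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

pixel = 173                                   # one red-channel intensity value
cipher = encrypt(pixel)
print(decrypt(cipher) == pixel)               # True

# Paillier is additively homomorphic: a product of ciphertexts decrypts to the sum.
print(decrypt((encrypt(100) * encrypt(55)) % n2))   # 155
```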

Keywords: image encryption, Paillier cryptographic system, RGB image encryption

Procedia PDF Downloads 205
13455 Quality and Quality Assurance in Education: Examining the Possible Relationship

Authors: Rodoula Stavroula Gkarnara, Nikolaos Andreadakis

Abstract:

The purpose of this paper is to examine the relationship between quality and quality assurance in education. It constitutes a critical review of the literature regarding quality and its delimitation in the field of education, as well as quality assurance in education and the approaches identified for its extensive study. There are two prevailing and opposite views on the correlation between the two concepts: on the one hand, that there is an inherent distance between them, as they are two separate terms; and on the other, that they are interrelated and interdependent concepts that contribute to the improvement of quality in education. Adopting the second view, the last part of the paper discusses the contribution of quality assurance to quality, pointing out that the former leads to the improvement of the latter, with quality assurance serving as the means of feedback on the quality achieved.

Keywords: education, quality, quality assurance, quality improvement

Procedia PDF Downloads 184