Search results for: image schema
2285 The Effect of Closed Circuit Television Image Patch Layout on Performance of a Simulated Train-Platform Departure Task
Authors: Aaron J. Small, Craig A. Fletcher
Abstract:
This study investigates the effect of closed circuit television (CCTV) image patch layout on performance of a simulated train-platform departure task. The within-subjects experimental design measures target detection rate and response latency during a CCTV visual search task conducted as part of the procedure for safe train dispatch. Three interface designs were developed by manipulating CCTV image patch layout. Eye movements, perceived workload and system usability were measured across experimental conditions. Task performance was compared to identify significant differences between conditions. The results of this study have not yet been determined.
Keywords: rail human factors, workload, closed circuit television, platform departure, attention, information processing, interface design
Procedia PDF Downloads 166
2284 Contradictive Representation of Women in Postfeminist Japanese Media
Authors: Emiko Suzuki
Abstract:
Although some claim that we are in a post-feminist society, the word "postfeminism" still raises questions for many. In postfeminist media, as the British sociologist Rosalind Gill points out, on the one hand, it seems to promote an empowering image of women who are active, positively sexually motivated, have the free will to make market choices, and exercise surveillance and discipline over their personality and body; yet on the other hand, such a beautiful and attractive feminist image imposes stronger surveillance of the mind and body on women. A similar representation, in which femininity is described in a contradictory way, is seen in Japanese media as well. This study tries to capture how postfeminist Japanese media is, contrary to its ostensible messages, encouraging women to join in obedience to the labor system by affirming the traditional image of attractive women through sexual objectification and promoting the values of neoliberalism. The result shows an interesting insight into how Japanese media is creating a conflicting ideal representation of women by repeatedly exposing such images.
Keywords: postfeminism, Japanese media, sexual objectification, embodiment
Procedia PDF Downloads 195
2283 The Mediating Role of Bank Image in Customer Satisfaction Building
Abstract:
The main objective of this research was to determine the dimensions of service quality in the banking industry of Iran. For this purpose, the study empirically examined the European perspective suggesting that service quality consists of three dimensions: technical, functional and image. This research is an applied research and its strategy is a causal strategy. A standard questionnaire was used for collecting the data. 287 customers of Melli Bank of Northwest were selected through cluster sampling and were studied. The results from a banking service sample revealed that the overall service quality is influenced more by a consumer's perception of technical quality than functional quality. Accordingly, the Grönroos model is a more appropriate representation of service quality than the American perspective, with its limited concentration on the dimension of functional quality, in the banking industry of Iran. So, knowing the key dimensions of service quality in this industry and planning for their improvement can increase the satisfaction of customers and the productivity of this industry.
Keywords: technical quality, functional quality, banking, image, mediating role
Procedia PDF Downloads 368
2282 Narrating 1968: Felipe Cazals’ Canoa (1976) and Images of Massacre
Authors: Nancy Elizabeth Naranjo Garcia
Abstract:
Canoa (1976) by Felipe Cazals is a film that exposes the consequences of the power that the Mexican State exercised over the 1968 student movement. The film, in this particular way, approaches the Tlatelolco Massacre from a point of view that takes into consideration the events that led up to it. Nonetheless, the reference to the political tension in Canoa remains ambiguous. Thus, the cinematographic representation refers to an event that leaves space for reflection, and as a consequence leaves evidence of an image that signals the notion of survival, as Georges Didi-Huberman points out. In addition to denouncing the oppressive force of the Mexican State, the images in Canoa also emphasize what did not happen in Tlatelolco and its condensation with the student activists. To observe the images that Canoa offers in a new light, this work proposes further exploration through the following questions: How do the images in Canoa narrate? How are the images inserted in the film? In this fashion, a more profound comprehension of the objective and the essence of the images becomes feasible. As a result, it is possible to analyze the images of Canoa alongside the literature on the real killing at San Miguel Canoa. The film visualizes a testimony of the event that once seemed unimaginable, an image that anticipates and structures the subsequent event. Therefore, this study takes a second look at how Canoa considers not only the killing at San Miguel Canoa and the Tlatelolco Massacre, but goes further to contextualize an unimaginable image.
Keywords: cinematographic representation, student movement, Tlatelolco Massacre, unimaginable image
Procedia PDF Downloads 218
2281 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification
Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong
Abstract:
It remains challenging to measure reliability from the classification results of different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in breast ultrasound classification models. Experimental results on breast ultrasound clinical datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials. The results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibits low multicenter correlation.
Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization
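As a rough illustration of the weight-fusion idea this abstract describes, the sketch below greedily averages model checkpoints, keeping a candidate only when a reliability score improves. It assumes PyTorch; reliability_fn is a hypothetical stand-in for the paper's dual-channel reliability objective, not the authors' implementation.

```python
# Greedy "model soup" weight fusion guided by a reliability score.
# Assumes all state dicts come from models sharing one architecture.
import copy
import torch

def average_state_dicts(state_dicts):
    """Uniformly average a list of compatible state dicts."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def greedy_reliable_soup(model, candidate_state_dicts, reliability_fn):
    """Add a candidate to the soup only if it improves reliability_fn."""
    soup = [candidate_state_dicts[0]]
    model.load_state_dict(average_state_dicts(soup))
    best = reliability_fn(model)          # hypothetical reliability objective
    for sd in candidate_state_dicts[1:]:
        model.load_state_dict(average_state_dicts(soup + [sd]))
        score = reliability_fn(model)
        if score > best:                  # keep the candidate in the soup
            soup.append(sd)
            best = score
    model.load_state_dict(average_state_dicts(soup))
    return model, best
```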
Procedia PDF Downloads 83
2280 Iris Detection on RGB Image for Controlling Side Mirror
Authors: Norzalina Othman, Nurul Na’imy Wan, Azliza Mohd Rusli, Wan Noor Syahirah Meor Idris
Abstract:
Iris detection is a process where the position of the eyes is extracted from face images. It is a current method used in many applications, such as security and drowsiness detection. This paper proposes the use of eye detection for controlling the side mirrors of motor vehicles. The eye detection method aims to make it easy for the driver to adjust the side mirrors automatically. The system determines the midpoint coordinates of the detected eyes on an RGB (color) image, and the y-coordinate is sent as an input signal to a controller in order to rotate the angle of the side mirror on the vehicle. The eye position was cropped, and the coordinates of the midpoint were successfully detected from the circle of the iris using Viola-Jones detection and circular Hough transform methods on the RGB image. The midpoint coordinates from the experiment were then tested with the controller to determine the angle of rotation of the side mirrors.
Keywords: iris detection, midpoint coordinates, RGB images, side mirror
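A minimal sketch of the detection pipeline described above (Viola-Jones eye detection followed by a circular Hough transform), assuming OpenCV; the input file, cascade choice, and Hough parameters are placeholder values, not the authors' settings.

```python
import cv2

img = cv2.imread("face.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

# Viola-Jones eye detection, then a circular Hough transform per eye ROI
for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.1, 5):
    roi = gray[y:y + h, x:x + w]
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=w,
                               param1=100, param2=20,
                               minRadius=w // 10, maxRadius=w // 3)
    if circles is not None:
        cx, cy, r = circles[0][0]
        midpoint = (int(x + cx), int(y + cy))     # iris midpoint in image coords
        print("iris midpoint:", midpoint)
```

The y-coordinate of the printed midpoint is what would be forwarded to the mirror controller.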
Procedia PDF Downloads 421
2279 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system, and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular ones in automatic MT evaluation are score-based, such as the BLEU score, and others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information like part-of-speech tagging (POS). This paper presents a language-independent machine learning framework for classifying pairwise translations. This framework uses vector representations of two machine-produced translations, one from a statistical machine translation model (SMT) and one from a neural machine translation model (NMT). The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known basic approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in the schema. In terms of a balance between the results, better accuracy results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis is carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian as compared to Greek, taking into account that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
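A minimal Keras sketch of the pairwise set-up described above, assuming pre-computed sentence embeddings; the layer sizes and the 300-dimensional embeddings are arbitrary illustrative choices, not the paper's configuration.

```python
# The network sees the reference and the two MT outputs and predicts
# which system (SMT or NMT) produced the better translation.
from tensorflow import keras
from tensorflow.keras import layers

dim = 300                                   # assumed embedding dimension
ref = keras.Input(shape=(dim,), name="reference")
smt = keras.Input(shape=(dim,), name="smt_output")
nmt = keras.Input(shape=(dim,), name="nmt_output")

x = layers.Concatenate()([ref, smt, nmt])
x = layers.Dense(128, activation="relu")(x)
x = layers.Dense(64, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid", name="smt_vs_nmt")(x)

model = keras.Model([ref, smt, nmt], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```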
Procedia PDF Downloads 130
2278 Fracture Crack Monitoring Using Digital Image Correlation Technique
Authors: B. G. Patel, A. K. Desai, S. G. Shah
Abstract:
The main objective of this paper is to develop a new measurement technique that does not touch the object. Digital image correlation (DIC) is an advanced measurement technique used to measure particle displacement with very high accuracy. This powerful, innovative technique correlates two image segments to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with (steel and glass) fibers and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine with crack mouth opening displacement (CMOD) control at an opening rate of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instances of loading, and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length and crack tip location. It was seen that the CMOD and vertical load-point displacement computed using DIC analysis match well with those measured experimentally.
Keywords: digital image correlation, fibres, self-compacting concrete, size effect
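A simplified sketch of the core DIC idea, assuming OpenCV: track a small subset (template) from the undeformed reference image into the deformed image via normalized cross-correlation. Real DIC adds subpixel interpolation and dense displacement fields; the file names and subset coordinates here are placeholders.

```python
import cv2

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # undeformed state
cur = cv2.imread("deformed.png", cv2.IMREAD_GRAYSCALE)    # loaded state

x, y, size = 200, 150, 32                 # hypothetical subset location/size
subset = ref[y:y + size, x:x + size]

# Normalized cross-correlation of the subset against the deformed image
result = cv2.matchTemplate(cur, subset, cv2.TM_CCOEFF_NORMED)
_, _, _, (best_x, best_y) = cv2.minMaxLoc(result)

du, dv = best_x - x, best_y - y           # in-plane displacement in pixels
print(f"subset displacement: du={du}px, dv={dv}px")
```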
Procedia PDF Downloads 387
2277 Nazca: A Context-Based Matching Method for Searching Heterogeneous Structures
Authors: Karine B. de Oliveira, Carina F. Dorneles
Abstract:
Structure-level matching is the problem of combining elements of a structure, which can be represented as entities, classes, XML elements, web forms, and so on. This is a challenge due to the large number of distinct representations of semantically similar structures. This paper describes a structure-based matching method applied to search for different representations in data sources, considering the similarity between elements of two structures and the data source context. Using real data sources, we have conducted an experimental study comparing our approach with our baseline implementation and with another important schema matching approach. We demonstrate that our proposal reaches higher precision than the baseline.
Keywords: context, data source, index, matching, search, similarity, structure
Procedia PDF Downloads 361
2276 Clothes Identification Using Inception ResNet V2 and MobileNet V2
Authors: Subodh Chandra Shakya, Badal Shrestha, Suni Thapa, Ashutosh Chauhan, Saugat Adhikari
Abstract:
To tackle the problem of clothes identification, we used different architectures of Convolutional Neural Networks. Among the different architectures, the outcomes from Inception ResNet V2 and MobileNet V2 seemed promising. On comparing the metrics, we observed that Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. This paper therefore proposes a clothes identifier using Inception ResNet V2 and also contains the comparison between the outcomes of Inception ResNet V2 and MobileNet V2. The document contains the results and findings of the research that we performed on the DeepFashion dataset. To improve the dataset, we used different image preprocessing techniques like image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on clothes identification so that we could develop a reliable system that is good enough at identifying the clothes worn by users. The whole system can be integrated with some kind of recommendation system.
Keywords: inception ResNet, convolutional neural net, deep learning, confusion matrix, data augmentation, data preprocessing
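An indicative Keras sketch, not the authors' code, of fine-tuning Inception ResNet V2 with augmentation of the kind mentioned above; the class count, input size, and augmentation ranges are placeholder values, and RandomZoom stands in for shearing, which core Keras layers do not provide.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import InceptionResNetV2

num_classes = 46                                 # e.g., DeepFashion categories
base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                           # fine-tune the head first

model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.RandomRotation(0.1),                  # augmentation: rotation
    layers.RandomZoom(0.1),                      # stand-in for shearing
    base,
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```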
Procedia PDF Downloads 186
2275 Addressing the Exorbitant Cost of Labeling Medical Images with Active Learning
Authors: Saba Rahimi, Ozan Oktay, Javier Alvarez-Valle, Sujeeth Bharadwaj
Abstract:
Successful application of deep learning in medical image analysis necessitates unprecedented amounts of labeled training data. Unlike conventional 2D applications, radiological images can be three-dimensional (e.g., CT, MRI), consisting of many instances within each image. The problem is exacerbated when expert annotations are required for effective pixel-wise labeling, which incurs exorbitant labeling effort and cost. Active learning is an established research domain that aims to reduce the labeling workload by prioritizing a subset of informative unlabeled examples to annotate. Our contribution is a cost-effective approach for 3D U-Net models that uses Monte Carlo sampling to analyze pixel-wise uncertainty. Experiments on the AAPM 2017 lung CT segmentation challenge dataset show that our proposed framework can achieve promising segmentation results by using only 42% of the training data.
Keywords: image segmentation, active learning, convolutional neural network, 3D U-Net
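A compact sketch of the Monte Carlo sampling idea for pixel-wise uncertainty, assuming a PyTorch segmentation model containing dropout layers: dropout is kept active at inference, several stochastic forward passes are run, and unlabeled volumes are ranked by mean per-pixel variance. This is a generic MC-dropout acquisition loop, not the paper's exact criterion.

```python
import torch

def mc_dropout_uncertainty(model, volume, n_samples=10):
    model.train()  # keeps dropout active; BatchNorm caveats apply in practice
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(volume), dim=1)
                             for _ in range(n_samples)])
    return probs.var(dim=0).mean().item()   # scalar uncertainty score

def select_for_annotation(model, unlabeled_volumes, budget=5):
    """Return indices of the most uncertain volumes to send for labeling."""
    scores = [(mc_dropout_uncertainty(model, v), i)
              for i, v in enumerate(unlabeled_volumes)]
    return [i for _, i in sorted(scores, reverse=True)[:budget]]
```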
Procedia PDF Downloads 152
2274 Looking beyond Lynch's Image of a City
Authors: Sandhya Rao
Abstract:
Kevin Lynch's theory of imageability lets one explore a city in terms of five elements: nodes, paths, edges, landmarks and districts. What happens when we try to record the same data in an Indian context? What happens when we apply the same theory of imageability to the complex, shifting urban patterns of Indian cities, and how can we as urban designers demonstrate our role in the image-building ordeal of these cities? The organizational patterns formed through mental images of an Indian city are often diverse and intangible. They are also multi-layered and temporary in terms of the spirit of the place. The pattern of images formed is loaded with associative meaning and intrinsically linked with the history and socio-cultural dominance of the place. The embedded memory of a place in one's mind often plays an even more important role while formulating these images. Thus, while deriving an image of a city, one is often confused or finds the result chaotic. The images formed, due to their complexity, are further difficult to represent using a single medium. Under such a scenario, it is difficult to derive an output of the image constructed, as well as to make design interventions to enhance the legibility of a place. However, there can be a combination of tools and methods that allows one to record the key elements of a place through time, space and one's user interface with the place. There also has to be a clear understanding of the participant groups of a place and their time and period of engagement with the place. How we can translate the result obtained into a design intervention at the end is the main aim of the research. Could a multi-faceted cognitive mapping be an answer to this, or could it be a very transient mapping method which can change over time, place and person? How does the context influence the process of image building in one's mind? These are the key questions that this research aims to answer.
Keywords: imageability, organizational patterns, legibility, cognitive mapping
Procedia PDF Downloads 312
2273 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing
Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah
Abstract:
The aim of this paper is to present a distributed implementation of the Type-2 Fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is implemented on a SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents as the AVPE (Agent Virtual Processing Element) model, in order to improve the processing resources needed for performing big data image segmentation. In this work, we focus on the application of this algorithm to process a big data MRI (Magnetic Resonance Imaging) image of size (m x n). The image is encapsulated in the mobile agent team leader and split into (m x n) pixels, one per AVPE. Each AVPE performs and exchanges segmentation results and maintains asynchronous communication with its team leader until the algorithm converges. Some interesting experimental results are obtained in terms of accuracy and efficiency analysis of the proposed implementation, thanks to the several interesting capabilities of mobile agents introduced in this distributed computational model.
Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing
Procedia PDF Downloads 426
2272 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning
Authors: Nicholas V. Scott, Jack McCarthy
Abstract:
Military, law enforcement, and counter-terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influence high resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction by allowing only certain modes of the electromagnetic field to be captured, providing strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imagery camera for the purpose of exploring the degree to which an imaged polarized scene of a potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization.
Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization
Procedia PDF Downloads 137
2271 Evaluation of Robust Feature Descriptors for Texture Classification
Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo
Abstract:
Texture is an important characteristic in real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques in a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scaling-invariant image features such as SIFT, SURF and ORB have been successfully used in image retrieval and object recognition. In this paper, we compare the performance of these feature descriptors for texture classification using k-means clustering. Different classifiers, including k-NN, Naive Bayes, Back Propagation Neural Network, Decision Tree and KStar, were applied to three texture image sets: UIUCTex, KTH-TIPS and Brodatz, respectively. Experimental results reveal SIFT as the holder of the best average accuracy rate on UIUCTex and KTH-TIPS, while SURF has the advantage on the Brodatz texture set. The BP neural network works best in test set classification among all the classifiers used.
Keywords: texture classification, texture descriptor, SIFT, SURF, ORB
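A rough sketch of the bag-of-visual-words pipeline implied above, assuming OpenCV and scikit-learn; ORB is used here because SIFT/SURF are patent-restricted in some OpenCV builds, and the paths, vocabulary size, and k are placeholder choices.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

orb = cv2.ORB_create()

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, des = orb.detectAndCompute(img, None)
    return des

def bow_histogram(des, kmeans, k):
    """Quantize descriptors into visual words and build a normalized histogram."""
    words = kmeans.predict(des.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(k + 1))
    return hist / hist.sum()

train_paths, train_labels = ["t1.png", "t2.png"], [0, 1]   # hypothetical data
k = 50                                                     # vocabulary size
all_des = np.vstack([descriptors(p) for p in train_paths]).astype(np.float32)
kmeans = KMeans(n_clusters=k, n_init=10).fit(all_des)

X = [bow_histogram(descriptors(p), kmeans, k) for p in train_paths]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
print(clf.predict([bow_histogram(descriptors("query.png"), kmeans, k)]))
```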
Procedia PDF Downloads 367
2270 Evaluation of Longitudinal Relaxation Time (T1) of Bone Marrow in Lumbar Vertebrae of Leukaemia Patients Undergoing Magnetic Resonance Imaging
Authors: M. G. R. S. Perera, B. S. Weerakoon, L. P. G. Sherminie, M. L. Jayatilake, R. D. Jayasinghe, W. Huang
Abstract:
The aim of this study was to measure and evaluate the longitudinal relaxation times (T1) in the bone marrow of an Acute Myeloid Leukaemia (AML) patient in order to explore the potential for a prognostic biomarker using Magnetic Resonance Imaging (MRI), which would be a non-invasive prognostic approach to AML. MR image data were collected in the DICOM format, and MATLAB Simulink software was used in the image processing and data analysis. For quantitative MRI data analysis, Regions of Interest (ROI) on multiple image slices were drawn encompassing the vertebral bodies of L3, L4, and L5. T1 was evaluated using the T1 maps obtained. The estimated mean T1 value of bone marrow was 790.1 ms at 3T. However, the reported T1 value of healthy subjects (946.0 ms) is significantly higher than the present finding. This suggests that the T1 of bone marrow can be considered a potential prognostic biomarker for AML patients.
Keywords: acute myeloid leukaemia, longitudinal relaxation time, magnetic resonance imaging, prognostic biomarker
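For illustration, a small curve-fitting sketch under an assumed inversion-recovery signal model, S(TI) = S0 * |1 - 2*exp(-TI/T1)|, with synthetic data; the study's actual T1-mapping pipeline may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, s0, t1):
    """Magnitude inversion-recovery signal model."""
    return s0 * np.abs(1.0 - 2.0 * np.exp(-ti / t1))

ti = np.array([50, 150, 400, 800, 1600, 3200], dtype=float)  # inversion times (ms)
true_s0, true_t1 = 1000.0, 790.0
signal = ir_signal(ti, true_s0, true_t1) + np.random.normal(0, 5, ti.size)

popt, _ = curve_fit(ir_signal, ti, signal, p0=[signal.max(), 1000.0])
print(f"fitted T1 = {popt[1]:.1f} ms")   # should land close to 790 ms
```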
Procedia PDF Downloads 529
2269 Unequal Error Protection of VQ Image Transmission System
Authors: Khelifi Mustapha, A. Moulay lakhdar, I. Elawady
Abstract:
We study unequal error protection for vector-quantized (VQ) images. We have used Reed-Solomon (RS) codes as channel coding because they offer better performance in terms of channel error correction over a binary output channel. Such a channel (binary input and output) should be considered when working at the application layer, because it includes all the features of the layers located below it, in which it is usually not feasible to make changes.
Keywords: vector quantization, channel error correction, Reed-Solomon channel coding, application
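A toy sketch of the unequal-protection idea, assuming the third-party Python package reedsolo: perceptually important VQ indices get more RS parity bytes than the rest. The parity sizes and data are arbitrary illustrations, and recent reedsolo versions return a (message, message+ecc, errata) tuple from decode.

```python
from reedsolo import RSCodec

strong = RSCodec(32)   # heavier protection: corrects up to 16 byte errors
weak = RSCodec(8)      # lighter protection: corrects up to 4 byte errors

important_indices = bytes([1, 5, 9, 200])    # hypothetical coarse VQ data
detail_indices = bytes(range(64))            # hypothetical fine VQ data

packet = strong.encode(important_indices) + weak.encode(detail_indices)

# After the noisy channel, each part is decoded with its own codec:
recovered_important = strong.decode(packet[:len(important_indices) + 32])[0]
assert recovered_important == important_indices
```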
Procedia PDF Downloads 363
2268 Why and When to Teach Definitions: Necessary and Unnecessary Discontinuities Resulting from the Definition of Mathematical Concepts
Authors: Josephine Shamash, Stuart Smith
Abstract:
We examine reasons for introducing definitions in teaching mathematics in a number of different cases. We try to determine if, where, and when to provide a definition, and which definition to choose. We characterize different types of definitions and the different purposes we may have for formulating them, and detail examples of each type. Giving a definition at a certain stage can sometimes be detrimental to the development of the concept image. In such a case, it is advisable to delay the precise definition to a later stage. We describe two models, the 'successive approximation model' and the 'model of the extending definition', that fit such situations. Detailed examples that fit the different models are given, based on material taken from a number of textbooks, together with an analysis of the way the concept is introduced and where and how its definition is given. Our conclusion, based on this analysis, is that some of the definitions given may cause discontinuities in the learning sequence and constitute obstacles and unnecessary cognitive conflicts in the formation of the concept definition. However, in other cases, the discontinuity in passing from definition to definition actually serves a didactic purpose, is unavoidable for the mathematical evolution of the concept image, and is essential for students to deepen their understanding.
Keywords: concept image, mathematical definitions, mathematics education, mathematics teaching
Procedia PDF Downloads 128
2267 Using Computer Vision to Detect and Localize Fractures in Wrist X-ray Images
Authors: John Paul Q. Tomas, Mark Wilson L. de los Reyes, Kirsten Joyce P. Vasquez
Abstract:
The most frequent type of fracture is a wrist fracture, which is often difficult for medical professionals to find and locate. In this study, fractures in wrist x-ray images were located and identified using deep learning and computer vision. The researchers used image filtering, masking, morphological operations, and data augmentation for image preprocessing, and trained the RetinaNet and Faster R-CNN models with ResNet50 backbones and Adam optimizers separately for each image filtering technique and projection. The RetinaNet model with the Anisotropic Diffusion Smoothing filter, trained for 50 epochs, obtained the greatest accuracy of 99.14%, precision of 100%, sensitivity/recall of 98.41%, specificity of 100%, and an IoU score of 56.44% for the Posteroanterior projection utilizing augmented data. For the Lateral projection using augmented data, the RetinaNet model with an Anisotropic Diffusion filter trained for 50 epochs produced the highest accuracy of 98.40%, precision of 98.36%, sensitivity/recall of 98.36%, specificity of 98.43%, and an IoU score of 58.69%. When comparing the test results of the different individual projections, models, and image filtering techniques, the Anisotropic Diffusion filter trained for 50 epochs produced the best classification and regression scores for both projections.
Keywords: artificial intelligence, computer vision, wrist fracture, deep learning
Procedia PDF Downloads 72
2266 Remote Sensing through Deep Neural Networks for Satellite Image Classification
Authors: Teja Sai Puligadda
Abstract:
Detailed satellite images can serve an important role in geographic study. Quantitative and qualitative information provided by satellite and remote sensing images minimizes the complexity of work and time. Data/images are captured at regular intervals by satellite remote sensing systems, and the amount of data collected is often enormous and expands rapidly as technology develops. Interpreting remote sensing images, geographic data mining, and researching distinct vegetation types such as agricultural land and forests are all part of satellite image categorization. One of the biggest challenges data scientists face while classifying satellite images is finding the most suitable classification algorithms among those available that can classify the images with the utmost accuracy. In order to categorize satellite images, which is difficult due to the sheer volume of data, many academics are turning to deep learning algorithms. As the CNN algorithm gives high accuracy in image recognition problems and automatically detects the important features without any human supervision, and the ANN algorithm stores information on the entire network (Abhishek Gupta, 2020), these two deep learning algorithms have been used for satellite image classification. This project focuses on remote sensing through deep neural networks, i.e., ANN and CNN, with the DeepSat (SAT-4) airborne dataset for classifying images. Thus, in this project of classifying satellite images, the ANN and CNN algorithms are implemented, evaluated, and compared, and the performance is analyzed through evaluation metrics such as accuracy and loss. Additionally, the neural network algorithm which gives the lowest bias and lowest variance in solving multi-class satellite image classification is analyzed.
Keywords: artificial neural network, convolutional neural network, remote sensing, accuracy, loss
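A minimal Keras sketch, assuming SAT-4's 28x28 four-band (R, G, B, NIR) patches and four land-cover classes, of the kind of small CNN compared in the project; the architecture itself is an illustrative guess, not the one evaluated.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 4)),          # R, G, B, NIR bands
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),    # four SAT-4 classes
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```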
Procedia PDF Downloads 158
2265 Optimized Approach for Secure Data Sharing in Distributed Database
Authors: Ahmed Mateen, Zhu Qingsheng, Ahmad Bilal
Abstract:
In the current age of technology, information is the most precious asset of a company. Today, companies have a large amount of data. As the data grow larger, access to particular pieces of information becomes slower day by day. Faster data processing, to shape data into information, is the biggest issue. The major problems in distributed databases are the efficiency of data distribution and the response time of data distribution. The security of data distribution is also a big issue. For these problems, we propose a strategy that can maximize the efficiency of data distribution and also improve its response time. This technique gives better results for secure data distribution from multiple heterogeneous sources. The newly proposed technique facilitates companies in sharing data securely, efficiently, and quickly.
Keywords: ER-schema, electronic record, P2P framework, API, query formulation
Procedia PDF Downloads 331
2264 Secret Sharing in Visual Cryptography Using NVSS and Data Hiding Techniques
Authors: Misha Alexander, S. B. Waykar
Abstract:
Visual cryptography is a special unbreakable encryption technique that transforms the secret image into random noisy pixels. These shares are transmitted over the network, and because of their noisy texture, they attract hackers. To address this issue, a Natural Visual Secret Sharing (NVSS) scheme was introduced that uses natural shares, either in digital or printed form, to generate the noisy secret share. This scheme greatly reduces the transmission risk but causes distortion in the retrieved secret image through variation in the settings and properties of the digital devices used to capture the natural image during the encryption/decryption phase. This paper proposes a new NVSS scheme that extracts the secret key from randomly selected, unaltered, multiple natural images. To further improve the security of the shares, data hiding techniques such as steganography and alpha channel watermarking are proposed.
Keywords: decryption, encryption, natural visual secret sharing, natural images, noisy share, pixel swapping
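A toy sketch of the XOR intuition behind natural-image shares: a secret image XORed with an unaltered natural image yields a noise-like share, and XORing again recovers the secret. This is a simplification for illustration, not the paper's exact NVSS scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
secret = rng.integers(0, 256, (64, 64), dtype=np.uint8)    # stand-in secret image
natural = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in natural image

noisy_share = secret ^ natural          # transmitted share looks like noise
recovered = noisy_share ^ natural       # receiver holds the same natural image
assert np.array_equal(recovered, secret)
```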
Procedia PDF Downloads 403
2263 Numerical Implementation and Testing of Fractioning Estimator Method for the Box-Counting Dimension of Fractal Objects
Authors: Abraham Terán Salcedo, Didier Samayoa Ochoa
Abstract:
This work presents a numerical implementation of a method for estimating the box-counting dimension of self-avoiding curves on a planar space, i.e., fractal objects captured in digital images; this method is named the fractioning estimator. Classical methods of digital image processing, such as noise filtering, contrast manipulation, and thresholding, among others, are used in order to obtain binary images that are suitable for performing the necessary computations of the fractioning estimator. A user interface is developed for performing the image processing operations and testing the fractioning estimator on different captured images of real-life fractal objects. To analyze the results, the estimations obtained through the fractioning estimator are compared to the results obtained through other methods that are already implemented in different available software for computing and estimating the box-counting dimension.
Keywords: box-counting, digital image processing, fractal dimension, numerical method
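For reference, a classical box-counting baseline (which the fractioning estimator refines), assuming a binary image with the curve marked as ones: the dimension is estimated as the slope of log(count) versus log(1/box size).

```python
import numpy as np

def box_count(binary, size):
    """Count boxes of the given size containing at least one curve pixel."""
    h, w = binary.shape
    count = 0
    for i in range(0, h, size):
        for j in range(0, w, size):
            if binary[i:i + size, j:j + size].any():
                count += 1
    return count

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(binary, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

binary = np.zeros((128, 128), dtype=bool)
binary[np.arange(128), np.arange(128)] = True     # a straight line: D should be ~1
print(f"estimated dimension: {box_counting_dimension(binary):.2f}")
```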
Procedia PDF Downloads 81
2262 A Combination of Anisotropic Diffusion and Sobel Operator to Enhance the Performance of the Morphological Component Analysis for Automatic Crack Detection
Authors: Ankur Dixit, Hiroaki Wagatsuma
Abstract:
Crack detection on concrete bridges is an important and constant task in civil engineering. Traditionally, humans check the bridge for cracks to maintain its quality and reliability, but this process is very long and costly. To overcome such limitations, we have used a drone with a digital camera, which took images of the bridge deck, and these images were processed by morphological component analysis (MCA). The MCA technique is a very strong application of sparse coding, and it explores the possibility of separating image components. In this paper, MCA has been used to decompose the image into coarse and fine components using two dictionaries, namely anisotropic diffusion and the wavelet transform. Anisotropic diffusion is an adaptive smoothing process that adjusts the diffusion coefficient using the gray level and gradient as features. The cracks in the image are enhanced by subtracting the diffused coarse image from the original image, and the results are treated by a Sobel edge detector and binary filtering to exhibit the cracks clearly. Our results demonstrate that the proposed MCA framework using anisotropic diffusion followed by the Sobel operator and binary filtering may contribute to the automation of crack detection, even in severe open-field conditions such as bridge decks.
Keywords: anisotropic diffusion, coarse component, fine component, MCA, Sobel edge detector, wavelet transform
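A condensed sketch of the enhancement path described above: Perona-Malik anisotropic diffusion, subtraction of the diffused (coarse) image from the original to expose fine detail, then Sobel edges and binary thresholding. It assumes OpenCV/NumPy; the iteration count, kappa, and threshold are illustrative values, not the paper's settings.

```python
import cv2
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.2):
    u = img.astype(np.float32)
    for _ in range(n_iter):
        # finite-difference gradients toward the four neighbors
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Perona-Malik conduction weights suppress diffusion across edges
        u += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

gray = cv2.imread("bridge_deck.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
coarse = anisotropic_diffusion(gray)
fine = cv2.absdiff(gray, coarse)                  # cracks live in the fine detail
gx = cv2.Sobel(fine, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(fine, cv2.CV_32F, 0, 1)
_, cracks = cv2.threshold(np.sqrt(gx ** 2 + gy ** 2), 20, 255, cv2.THRESH_BINARY)
```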
Procedia PDF Downloads 172
2261 A Primary Care Diagnosis of Middle-Aged Men with Oral Cancer Who Underwent Extensive Resection and Flap Repair: A Case Report
Authors: Ching-Yi Huang, Pi-Fen Cheng, Hui-Zhu Chen, Shi Ting Huang, Heng-Hua Wang
Abstract:
This is a case of oral cancer after extensive resection and modified right lateral neck lymph node dissection, followed by reconstruction with a skin flap. The nursing period lasted from September 25 to October 3, 2017. Through observation, interview, physical assessment, and medical record review, the author identified the following nursing problems: acute pain, impaired oral mucous membrane, and body image change. During the nursing period, the author provided individual and overall nursing care and established mutual trust through the use of empathy. The author listened to and eased the patient's physical discomfort; for wound pain, medications and acupuncture massage were used for relief. For the oral mucosa changes caused by surgery, continuous and complete oral care and oral exercise training were provided to improve oral mucosal healing and restore swallowing function. Regarding the body image changes, the author guided him to express his feelings after the change, enhanced support from the family, and encouraged him to attend a head and neck cancer survivor alliance, which allowed the patient to accept the altered body image and reaffirm his self-worth. Hopefully, sharing this nursing experience will contribute to the quality of nursing care for oral cancer patients after extensive resection and modified right lateral neck lymph node dissection followed by reconstruction with a skin flap.
Keywords: oral cancer, acute pain, impaired oral mucous membrane, body image change
Procedia PDF Downloads 185
2260 A Schema of Building an Efficient Quality Gate throughout the Software Development with Tools
Authors: Le Chen
Abstract:
This paper presents an efficient tool platform scheme to ensure quality protection throughout the software development process. The main principle is to manage the information of the requirements, design, development, testing, operation, and maintenance processes with proper tools, and to set up the quality standards of each process. Through the tools' display and summary of quality standards, the quality standards can be visualized and made ready for policy decisions, which is called a Quality Gate in this paper. In addition, the tools are also integrated to achieve the exchange and relation of information, greatly improving operational efficiency. In this paper, the feasibility of the scheme is verified by practical application in development projects, and further improvements to the overall information display and data mining are proposed.
Keywords: efficiency, quality gate, software process, tools
Procedia PDF Downloads 356
2259 Improved Processing Speed for Text Watermarking Algorithm in Color Images
Authors: Hamza A. Al-Sewadi, Akram N. A. Aldakari
Abstract:
Copyright protection and ownership proof of digital multimedia are achieved nowadays by digital watermarking techniques. A text watermarking algorithm for protecting the property rights and ownership judgment of color images is proposed in this paper. Embedding is achieved by inserting text elements randomly into the color image as noise. The YIQ image processing model is found to be faster than other image processing methods, and hence it is adopted for the embedding process. An optional choice of encrypting the text watermark before embedding is also suggested (in case required by some applications), where the text can be encrypted using any enciphering technique, adding more difficulty for hackers. Experiments resulted in an embedding speed improvement of more than double the speed of other considered systems (such as the least significant bit method and separate color code methods), and a fairly acceptable level of peak signal-to-noise ratio (PSNR) with low mean square error values for watermarking purposes.
Keywords: steganography, watermarking, time complexity measurements, private keys
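For concreteness, a brief sketch of the standard NTSC RGB-to-YIQ conversion on which the embedding model rests; the watermark-insertion step itself is only indicated by a comment, since the paper's exact embedding rule is not reproduced here.

```python
import numpy as np

RGB_TO_YIQ = np.array([[0.299, 0.587, 0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523, 0.312]])

def rgb_to_yiq(rgb):                        # rgb: H x W x 3, floats in [0, 1]
    return rgb @ RGB_TO_YIQ.T

def yiq_to_rgb(yiq):
    return yiq @ np.linalg.inv(RGB_TO_YIQ).T

img = np.random.rand(4, 4, 3)               # stand-in color image
yiq = rgb_to_yiq(img)
# ... text watermark elements would be scattered into a chosen channel here ...
restored = yiq_to_rgb(yiq)
assert np.allclose(restored, img, atol=1e-6)
```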
Procedia PDF Downloads 142
2258 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform
Authors: David Jurado, Carlos Ávila
Abstract:
Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications that are highly correlated with breast cancer development in the early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design, which extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images due to its low sensitivity to noisy images and low-contrast mammographies.
Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis
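An illustrative sketch, assuming OpenCV, of the circle-Hough step described above: round high-contrast candidates are detected (HOUGH_GRADIENT performs Canny edge extraction internally, with param1 as its upper threshold), a circular mask is built for each, and simple statistics are read off the region of interest. All parameter values are placeholders, not the paper's settings.

```python
import cv2
import numpy as np

mammo = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)

circles = cv2.HoughCircles(mammo, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=150, param2=15, minRadius=1, maxRadius=10)
if circles is not None:
    for cx, cy, r in np.uint16(np.around(circles[0])):
        mask = np.zeros_like(mammo)
        cv2.circle(mask, (cx, cy), r, 255, thickness=-1)   # automatic ROI mask
        roi = mammo[mask == 255]
        print(f"candidate at ({cx},{cy}): mean={roi.mean():.1f}, std={roi.std():.1f}")
```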
Procedia PDF Downloads 82
2257 Impact of Green Marketing Mix Strategy and CSR on Organizational Performance: An Empirical Study of Manufacturing Sector of Pakistan
Authors: Syeda Shawana Mahasan, Muhammad Farooq Akhtar
Abstract:
The objective of this study is to analyze the influence of the green marketing mix strategy and corporate social responsibility (CSR) on the performance of an organization, taking into account the mediating effect of corporate image. The impact of frugal innovation and corporate activism is also examined. The data were gathered from executives at various levels of management (top, middle, and lower-level managers) from a total of 550 manufacturing enterprises of different sizes, ranging from small to medium to large. The collected replies were processed and analyzed using SmartPLS version 4.0.0.0. The application of PLS-SEM demonstrates that the green marketing mix strategy and corporate social responsibility have a significant impact on organizational performance. Therefore, it is imperative for organizations to effectively adopt environmentally sustainable and socially conscious methods within their operations. The results indicate that corporate image plays a key role in mediating the relationship between the green marketing mix strategy, corporate social responsibility, and organizational performance. This demonstrates the imperative for organizations to actively enhance their favorable reputation among stakeholders. The combination of frugal innovation and corporate activism strengthens the connection between corporate image and organizational performance. The current study assists managers in recognizing the significance of these particular constructs in maintaining the long-term performance of the organization.
Keywords: green marketing mix strategy, CSR, corporate image, organizational performance, frugal innovation, corporate activism
Procedia PDF Downloads 38
2256 MRI Quality Control Using Texture Analysis and Spatial Metrics
Authors: Kumar Kanudkuri, A. Sandhya
Abstract:
Typically, in an MRI clinical setting, there are several protocols run, each indicated for a specific anatomy and disease condition. However, these protocols, or parameters within them, can change over time due to changes in the recommendations of physician groups, updates in the software, or the availability of new technologies. Most of the time, the changes are performed by the MRI technologist to account for time, coverage, physiological, or Specific Absorption Rate (SAR) reasons. However, giving proper guidelines to MRI technologists is important, so that they do not change parameters that negatively impact image quality. Typically, a standard American College of Radiology (ACR) MRI phantom is used for Quality Control (QC) in order to guarantee that the primary objectives of MRI are met. The visual evaluation of quality depends on the operator/reviewer and might change amongst operators, as well as for the same operator at various times. Therefore, overcoming these constraints is essential for a more impartial evaluation of quality, which makes quantitative estimation of image quality (IQ) metrics for MRI quality control very important. To solve this problem, we designed and developed a robust, open-source, automated analysis tool for measuring MRI image quality (IQ) metrics, namely Signal to Noise Ratio (SNR), Signal to Noise Ratio Uniformity (SNRU), Visual Information Fidelity (VIF), Feature Similarity (FSIM), Gray Level Co-occurrence Matrix (GLCM), slice thickness accuracy, slice position accuracy, and high-contrast spatial resolution; the tool provided good accuracy assessment. A standardized quality report is generated that incorporates the metrics that impact diagnostic quality.
Keywords: ACR MRI phantom, MRI image quality metrics, SNRU, VIF, FSIM, GLCM, slice thickness accuracy, slice position accuracy
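A short sketch of two of the listed metrics, SNR and a GLCM property, assuming scikit-image and the usual convention of a signal ROI and a background ROI on a phantom slice; the ROI coordinates and data are placeholders, not the tool's actual implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in slice

# SNR: mean of a signal ROI over the standard deviation of a background ROI
signal_roi = image[100:140, 100:140]
background_roi = image[5:25, 5:25]
snr = signal_roi.mean() / background_roi.std()
print(f"SNR = {snr:.2f}")

# GLCM texture feature (contrast) over the whole slice
glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
print(f"GLCM contrast = {graycoprops(glcm, 'contrast')[0, 0]:.1f}")
```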
Procedia PDF Downloads 168