Search results for: image clustering
2570 A Context-Sensitive Algorithm for Media Similarity Search
Authors: Guang-Ho Cha
Abstract:
This paper presents a context-sensitive media similarity search algorithm. One of the central problems in media search is the semantic gap between the low-level features computed automatically from media data and the human interpretation of them. This is because the notion of similarity is usually based on high-level abstraction, but the low-level features do not always reflect human perception. Many media search algorithms have used the Minkowski metric to measure similarity between image pairs. However, such functions cannot adequately capture the characteristics of the human visual system, nor the nonlinear relationships in the contextual information given by the images in a collection. Our search algorithm tackles this problem by employing a similarity measure and a ranking strategy that reflect the nonlinearity of human perception and the contextual information in a dataset. Similarity search in an image database based on this contextual information shows encouraging experimental results.
Keywords: context-sensitive search, image search, similarity ranking, similarity search
Procedia PDF Downloads 363
2569 Segmentation of Gray Scale Images of Dropwise Condensation on Textured Surfaces
Authors: Helene Martin, Solmaz Boroomandi Barati, Jean-Charles Pinoli, Stephane Valette, Yann Gavet
Abstract:
In the present work, we developed an image processing algorithm to measure water droplet characteristics during dropwise condensation on pillared surfaces. The main difficulty in this process is the similarity in shape and size between the water droplets and the pillars. The developed method divides droplets into four main groups based on their size and applies a corresponding algorithm to segment each group. These algorithms generate binary images of the droplets based on both their geometrical and intensity properties. The information related to droplet evolution over time, including the mean radius and the number of drops per unit area, is then extracted from the binary images. The developed image processing algorithm is verified against manual detection and applied to two different sets of images corresponding to two kinds of pillared surfaces.
Keywords: dropwise condensation, textured surface, image processing, watershed
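As a rough illustration of this kind of droplet measurement, the sketch below applies a marker-controlled watershed (matching the paper's keyword) to a single binarized frame and extracts the droplet count and mean radius. The file name, the Otsu thresholding, and the peak-distance parameter are assumptions, not the authors' four-group procedure.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import io, measure
from skimage.feature import peak_local_max
from skimage.filters import threshold_otsu
from skimage.segmentation import watershed

# Hypothetical grayscale frame from the condensation experiment.
frame = io.imread("condensation_frame.png", as_gray=True)

# Binarize droplets (bright on dark here; invert if the contrast is reversed).
binary = frame > threshold_otsu(frame)

# Marker-controlled watershed splits touching droplets.
distance = ndi.distance_transform_edt(binary)
coords = peak_local_max(distance, min_distance=5, labels=binary)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=binary)

# Droplet statistics used to track the condensation process over time.
props = measure.regionprops(labels)
radii = [p.equivalent_diameter / 2 for p in props]
print("droplet count:", len(props), "mean radius (px):", np.mean(radii))
```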
Procedia PDF Downloads 221
2568 Cause-Related Marketing: A Review of the Literature
Authors: Chang Hung Chen
Abstract:
Cause-related marketing (CRM) is typically effective for promoting products and is also accepted as a communication tool for creating a positive corporate image. Today, companies treat corporate social responsibility (CSR) as a core activity in pursuing the goal of sustainable development. CRM is not a synonym of CSR; rather, CRM is a part of CSR, or a type of marketing strategy within the CSR framework. This article focuses on the relationship between CSR and CRM, and on how CRM improves the CSR performance of the corporation. The research was conducted through a review of the literature on the subject area.
Keywords: cause-related marketing, corporate social responsibility, corporate image, consumer behavior
Procedia PDF Downloads 347
2567 Image Processing on Geosynthetic Reinforced Layers to Evaluate Shear Strength and Variations of the Strain Profiles
Authors: S. K. Khosrowshahi, E. Güler
Abstract:
This study investigates the reinforcement function of geosynthetics on the shear strength and strain profile of sand. Conducting a series of simple shear tests, the shearing behavior of the samples under static and cyclic loads was evaluated. Three different types of geosynthetics, including geotextiles and geonets, were used as the reinforcement materials. An image processing analysis based on the optical flow method was performed to measure the lateral displacements and estimate the shear strains. It is shown that, besides improving the shear strength, the geosynthetic reinforcement leads to a remarkable reduction in the shear strains. The improved layer reduces the thickness of the soil layer required to resist shear stresses. Consequently, geosynthetic reinforcement can be considered a proper approach for sustainable designs, especially in projects with a large amount of geotechnical work, such as pavement subgrades, roadways, and railways.
Keywords: image processing, soil reinforcement, geosynthetics, simple shear test, shear strain profile
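For readers unfamiliar with the optical-flow step, here is a minimal sketch of estimating lateral displacements and a shear-strain proxy from two consecutive frames using OpenCV's dense Farneback flow; the frame file names, window size, and the strain formula are illustrative assumptions rather than the authors' exact processing.

```python
import cv2
import numpy as np

# Hypothetical consecutive frames of the sand specimen captured during shearing.
prev = cv2.imread("shear_frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("shear_frame_001.png", cv2.IMREAD_GRAYSCALE)

# Dense Farneback optical flow gives a per-pixel displacement field (u, v).
flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=21,
                                    iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
u = flow[..., 0]   # horizontal (lateral) displacement in pixels
v = flow[..., 1]   # vertical displacement in pixels

# A simple engineering shear-strain estimate from the displacement gradients.
du_dy = np.gradient(u, axis=0)
dv_dx = np.gradient(v, axis=1)
shear_strain = du_dy + dv_dx
print("mean lateral displacement (px):", float(u.mean()))
print("mean shear strain estimate:", float(shear_strain.mean()))
```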
Procedia PDF Downloads 218
2566 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has further matured and is gradually being applied in the fields of the Internet of Things, unmanned aerial vehicle cluster communication, remote sensing scenarios, etc. We propose an improved semantic communication system for the situation where the data volume is huge and the spectrum resources are limited during the transmission of remote sensing images. At the transmitter, we need to extract the semantic information of remote sensing images, but there are some problems. The traditional semantic communication system based on convolutional neural networks (CNNs) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution, in order to obtain images with more semantic information. We use the wavelet transform to decompose the image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of remote sensing images. The Vision-Transformer structure can better handle the huge data volume and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and with image coding methods such as BPG and JPEG, to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
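A minimal sketch of the wavelet-based resolution-enhancement preprocessing described above, assuming a single-channel image and a Haar wavelet (the abstract does not name the wavelet): the low-frequency approximation band is upsampled with bicubic interpolation and the high-frequency detail bands with bilinear interpolation before the inverse transform.

```python
import cv2
import numpy as np
import pywt

def wavelet_upscale(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Roughly double image resolution by interpolating DWT sub-bands.

    The approximation (low-frequency) band is upsampled with bicubic
    interpolation, the detail (high-frequency) bands with bilinear
    interpolation, then the inverse DWT reassembles the image.
    """
    img = img.astype(np.float32)
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    size = (img.shape[1], img.shape[0])  # (width, height) expected by cv2.resize
    cA = cv2.resize(cA, size, interpolation=cv2.INTER_CUBIC)
    cH = cv2.resize(cH, size, interpolation=cv2.INTER_LINEAR)
    cV = cv2.resize(cV, size, interpolation=cv2.INTER_LINEAR)
    cD = cv2.resize(cD, size, interpolation=cv2.INTER_LINEAR)
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

if __name__ == "__main__":
    # Hypothetical single-band remote sensing tile.
    gray = cv2.imread("remote_sensing_tile.png", cv2.IMREAD_GRAYSCALE)
    upscaled = wavelet_upscale(gray)
    print(gray.shape, "->", upscaled.shape)
```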
Procedia PDF Downloads 77
2565 A Novel Probabilistic Spatial Locality of Reference Technique for Automatic Cleansing of Digital Maps
Authors: A. Abdullah, S. Abushalmat, A. Bakshwain, A. Basuhail, A. Aslam
Abstract:
GIS (Geographic Information System) applications require geo-referenced data; this data could be available as databases or in the form of digital or hard-copy agro-meteorological maps. These parameter maps are color-coded, with different regions corresponding to different parameter values, and converting such maps into a database is not very difficult. However, the text and the various planimetric elements overlaid on these maps make an accurate image-to-database conversion a challenging problem. The reason is that it is almost impossible to exactly replace what was underneath the text or icons, which points to the need for inpainting. In this paper, we propose a probabilistic inpainting approach that uses the probability of spatial locality of colors in the map to replace overlaid elements with the underlying color. We tested the limits of our proposed technique using non-textual simulated data and compared text-removal results with a popular image editing tool on public domain data, with promising results.
Keywords: noise, image, GIS, digital map, inpainting
Procedia PDF Downloads 351
2564 Tree-Based Inference for Regionalization: A Comparative Study of Global Topological Perturbation Methods
Authors: Orhun Aydin, Mark V. Janikas, Rodrigo Alves, Renato Assuncao
Abstract:
In this paper, a tree-based perturbation methodology for regionalization inference is presented. Regionalization is a constrained optimization problem that aims to create groups with similar attributes while satisfying spatial contiguity constraints. As in any constrained optimization problem, the spatial constraint may hinder convergence to some global minima, resulting in spatially contiguous members of a group with dissimilar attributes. This paper presents a general methodology for rigorously perturbing spatial constraints through the use of random spanning trees. The general framework presented can be used to quantify the effect of the spatial constraints on the overall regionalization result. We compare several types of stochastic spanning trees used in inference problems such as fuzzy regionalization and determining the number of regions. The performance of stochastic spanning trees is juxtaposed against the traditional permutation-based hypothesis testing frequently used in spatial statistics. Inference results for fuzzy regionalization and for determining the number of regions are presented on the Local Area Personal Incomes for Texas Counties provided by the Bureau of Economic Analysis.
Keywords: regionalization, constrained clustering, probabilistic inference, fuzzy clustering
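To make the spanning-tree idea concrete, the sketch below draws a random spanning tree of a toy contiguity graph (via a randomly weighted minimum spanning tree) and cuts edges to obtain contiguous regions. It ignores attribute similarity entirely and only illustrates how random trees perturb the spatial constraint; the grid size, number of regions, and cutting rule are assumptions.

```python
import random
import networkx as nx

def random_tree_regions(grid_shape=(10, 10), n_regions=4, seed=0):
    """Partition a spatial contiguity graph into contiguous regions by drawing
    a random spanning tree (randomly weighted MST) and cutting its heaviest
    edges; repeating with new seeds perturbs the spatial constraint."""
    random.seed(seed)
    g = nx.grid_2d_graph(*grid_shape)            # rook contiguity of a raster of units
    for u, v in g.edges():
        g[u][v]["weight"] = random.random()      # random weights -> a random spanning tree
    tree = nx.minimum_spanning_tree(g, weight="weight")
    cuts = sorted(tree.edges(data=True), key=lambda e: e[2]["weight"], reverse=True)[:n_regions - 1]
    tree.remove_edges_from([(u, v) for u, v, _ in cuts])
    return list(nx.connected_components(tree))

regions = random_tree_regions(seed=42)
print([len(r) for r in regions])   # sizes of the contiguous regions produced by one draw
```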
Procedia PDF Downloads 228
2563 Noise Removal Techniques in Medical Images
Authors: Amhimmid Mohammed Saffour, Abdelkader Salama
Abstract:
Filtering is a part of image enhancement; it is used to enhance certain details, such as edges, that are relevant to the application. Additionally, filtering can be used to eliminate unwanted noise components. Medical images typically contain salt-and-pepper noise and Poisson noise. This noise appears as minute grey-scale variations within the image. In this paper, different filtering techniques, namely median, Wiener, rank-order 3, rank-order 5, and average filters, were applied to CT medical images (brain and chest). We used all these filters to remove salt-and-pepper noise from the images. This type of noise consists of random pixels being set to black or white. The Peak Signal-to-Noise Ratio (PSNR), the Mean Square Error (MSE), and histograms were used to evaluate the quality of the filtered images. The results we achieved show that these filters are useful and prove to be helpful for general medical practitioners in analyzing patients' symptoms without difficulty.
Keywords: CT imaging, median filter, adaptive filter and average filter, MATLAB
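The study was carried out in MATLAB; as a rough Python equivalent, the sketch below adds salt-and-pepper noise to a stand-in image, applies median, average, and Wiener filters (with a larger median window standing in for the rank-order filters), and scores each result with PSNR and MSE.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter
from scipy.signal import wiener

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak Signal-to-Noise Ratio for 8-bit images."""
    mse = np.mean((reference.astype(np.float64) - np.asarray(test, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def add_salt_and_pepper(img: np.ndarray, amount: float = 0.05) -> np.ndarray:
    noisy = img.copy()
    mask = np.random.rand(*img.shape)
    noisy[mask < amount / 2] = 0        # pepper
    noisy[mask > 1 - amount / 2] = 255  # salt
    return noisy

clean = np.random.randint(0, 256, (256, 256)).astype(np.uint8)  # stand-in for a CT slice
noisy = add_salt_and_pepper(clean)

filtered = {
    "median 3x3": median_filter(noisy, size=3),
    "median 5x5": median_filter(noisy, size=5),          # rough analogue of the rank-order filters
    "average":    uniform_filter(noisy.astype(float), size=3),
    "wiener":     wiener(noisy.astype(float), mysize=3),
}
for name, img in filtered.items():
    mse = np.mean((clean.astype(float) - img) ** 2)
    print(f"{name:10s}  PSNR = {psnr(clean, img):.2f} dB  MSE = {mse:.1f}")
```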
Procedia PDF Downloads 312
2562 Image Reconstruction Method Based on L0 Norm
Authors: Jianhong Xiang, Hao Xiang, Linyu Wang
Abstract:
Compressed sensing (CS) has a wide range of applications in sparse signal reconstruction. Aiming at the problems of low recovery accuracy and long reconstruction time of existing reconstruction algorithms in medical imaging, this paper proposes a corrected smoothing L0 algorithm based on compressed sensing (CSL0). First, an approximate hyperbolic tangent function (AHTF) that is closer to the L0 norm is proposed to approximate the L0 norm. Secondly, in view of the "sawtooth phenomenon" of the steepest descent method and the sensitivity of the modified Newton method to the selection of the initial value, the steepest descent method and the modified Newton method are jointly optimized to improve the reconstruction accuracy. Finally, the CSL0 algorithm is simulated on various images. The results show that the algorithm proposed in this paper improves the reconstruction accuracy of the test images by 0-0.98 dB.
Keywords: smoothed L0, compressed sensing, image processing, sparse reconstruction
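The abstract does not give the exact AHTF, but smoothed-L0 methods all rely on a smooth surrogate that counts nonzero entries in the limit; one natural tanh-based member of that family (an illustrative assumption, not necessarily the paper's function) is:

```latex
% Illustrative smooth surrogate for the L0 norm; sigma controls the approximation.
\[
  \|x\|_0 \;\approx\; F_\sigma(x) \;=\; \sum_{i=1}^{N} \tanh\!\left(\frac{x_i^{2}}{2\sigma^{2}}\right),
  \qquad
  \lim_{\sigma \to 0^{+}} F_\sigma(x) \;=\; \|x\|_0 .
\]
```

The CS reconstruction then minimizes F_sigma(x) subject to the measurement constraint y = Phi x while gradually shrinking sigma.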
Procedia PDF Downloads 113
2561 A t-SNE and UMAP Based Neural Network Image Classification Algorithm
Authors: Shelby Simpson, William Stanley, Namir Naba, Xiaodi Wang
Abstract:
Both t-SNE and UMAP are state-of-the-art tools that predominantly preserve local structure, that is, they group neighboring data points together, which provides a very informative visualization of the heterogeneity in our data. In this research, we develop a t-SNE and UMAP based neural network image classification algorithm that embeds the original dataset into a corresponding low-dimensional dataset as a preprocessing step, and then uses this embedded dataset as input to our specially designed neural network classifier for image classification. We use the Fashion-MNIST dataset, a labeled dataset of images of clothing objects, in our experiments. t-SNE and UMAP are used for dimensionality reduction of the dataset and thus produce low-dimensional embeddings. Furthermore, we feed the embeddings from t-SNE and UMAP into two neural networks. The accuracy of the models from the two neural networks is then compared to that of a dense neural network that does not use an embedding as input, to show which model can classify the images of clothing objects more accurately.
Keywords: t-SNE, UMAP, fashion MNIST, neural networks
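A minimal scikit-learn sketch of this pipeline on a Fashion-MNIST subset, comparing a small MLP trained on raw pixels against one trained on a t-SNE embedding; the subset size, network architecture, and hyperparameters are assumptions, and UMAP (from the umap-learn package) would slot in the same way while also offering an out-of-sample transform.

```python
from sklearn.datasets import fetch_openml
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small Fashion-MNIST subset; the paper's exact network architecture is not given,
# so an off-the-shelf MLP stands in for the "specially designed" classifier.
X, y = fetch_openml("Fashion-MNIST", version=1, return_X_y=True, as_frame=False)
X, y = X[:5000] / 255.0, y[:5000]

# t-SNE has no out-of-sample transform, so the whole subset is embedded first
# and then split; UMAP avoids this caveat via its fit/transform interface.
emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

for name, features in {"raw pixels": X, "t-SNE embedding": emb}.items():
    Xtr, Xte, ytr, yte = train_test_split(features, y, test_size=0.2,
                                          random_state=0, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(Xtr, ytr)
    print(name, "test accuracy:", round(clf.score(Xte, yte), 3))
```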
Procedia PDF Downloads 197
2560 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves
Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira
Abstract:
Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help in monitoring large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of digital tomato images collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for the identification of healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel of the image, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The new images contain only green, red, or black pixels, depending on whether they come from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in the detection and estimation of the level of damage on the tomato leaves caused by late blight.
Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary
Procedia PDF Downloads 326
2559 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimation. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and the distinctiveness of images and their memorability scores. This suggests that images with more unique and distinctive features, which challenge the autoencoder's compressive capacities, are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
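The core correlation analysis can be sketched in a few lines; the arrays below are random stand-ins for the autoencoder's latents, reconstructions, and the MemCat memorability scores, so only the mechanics (per-image reconstruction error, nearest-neighbor distinctiveness in latent space, Spearman correlation) mirror the study.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.neighbors import NearestNeighbors

# Synthetic stand-ins: in the study these come from a VGG-based autoencoder
# run on MemCat images together with the images' measured memorability scores.
rng = np.random.default_rng(0)
n_images, latent_dim = 1000, 128
latents = rng.normal(size=(n_images, latent_dim))          # bottleneck representations
originals = rng.random(size=(n_images, 64, 64))            # input images
reconstructions = originals + rng.normal(scale=0.1, size=originals.shape)
memorability = rng.random(n_images)                        # MemCat-style scores in [0, 1]

# Per-image reconstruction error (pixel-wise MSE; the paper also tries perceptual losses).
recon_error = ((originals - reconstructions) ** 2).mean(axis=(1, 2))

# Distinctiveness: Euclidean distance to the nearest other image in latent space.
nn = NearestNeighbors(n_neighbors=2).fit(latents)
distances, _ = nn.kneighbors(latents)
distinctiveness = distances[:, 1]  # index 0 is the image itself

for name, values in {"reconstruction error": recon_error,
                     "distinctiveness": distinctiveness}.items():
    rho, p = spearmanr(values, memorability)
    print(f"{name}: Spearman rho = {rho:.3f} (p = {p:.3g})")
```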
Procedia PDF Downloads 88
2558 Brand Equity Tourism Destinations: An Application in Wine Regions Comparing Visitors' and Managers' Perspectives
Abstract:
The concept of brand equity in the wine tourism area is an interesting topic for exploring the factors that determine it. The aim of this study is to address this gap by investigating wine tourism destination brand equity and understanding the impact that the denomination of origin (DO) brand image and the destination image have on brand equity. Managing and monitoring the branding of wine tourism destinations is crucial to attracting tourist arrivals. The multiplicity of stakeholders involved in the branding process calls for research that, unlike previous studies, adopts a broader perspective and incorporates both an internal and an external perspective. Therefore, this gap has been addressed by comparing managers' and visitors' approaches to wine tourism destination brand equity. A survey questionnaire was used for data collection. The hypotheses were tested using winery managers and winery visitors, each holding a different position relative to wine tourism destination brand equity. All the interviews were conducted face-to-face. The survey instrument included several scales related to DO brand image, destination image, and wine tourism destination brand equity. All items were measured on seven-point Likert scales. Partial least squares was used to analyze the accuracy of the scales and the structural model, and multi-group analysis was used to identify the differences in the path coefficients and to test the hypotheses. The results show that the positive influence of DO brand image on wine tourism destination brand equity is stronger for wineries than for visitors, but there are no significant differences between the two groups. However, there are significant differences in the positive effect of destination brand image on both wine tourism destination brand equity and DO brand image. The results of this study are important for consultants, practitioners, and policy makers. The gap between managers and visitors calls for the development of campaigns to enhance the image that visitors hold and, thus, increase tourist arrivals. Events such as wine gatherings and gastronomic symposiums held at universities and culinary schools, as well as participation in business meetings, can enhance the perceptions and, in turn, the added value and brand equity of the wine tourism destinations. The images of destinations and DOs can help strengthen the brand equity of the wine tourism destinations, especially for visitors. Thus, the development and reinforcement of favorable, strong, and unique destination associations and DO associations are important to increase that value. Joint campaigns are advisable to enhance the images of destinations and DOs and, as a consequence, the value of the wine tourism destination brand.
Keywords: brand equity, managers, visitors, wine tourism
Procedia PDF Downloads 133
2557 Computational Cell Segmentation in Immunohistochemically Image of Meningioma Tumor Using Fuzzy C-Means and Adaptive Vector Directional Filter
Authors: Vahid Anari, Leila Shahmohammadi
Abstract:
Diagnosing and interpreting a large cohort dataset of immunohistochemically stained tumor tissue manually under an optical microscope involves subjectivity and is also tedious for pathology specialists. Moreover, digital pathology today represents more of an evolution than a revolution in pathology. In this paper, we develop and test an unsupervised algorithm that can automatically enhance the IHC image of a meningioma tumor and classify cells into positive (proliferative) and negative (normal) cells. A dataset including 150 images is used to test the scheme. In addition, a new adaptive color image enhancement method is proposed based on a vector directional filter (VDF) and the statistical properties of the filtering window. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively compared through application to a wide variety of real images.
Keywords: digital pathology, cell segmentation, immunohistochemically, noise reduction
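Since the abstract does not detail the clustering step beyond fuzzy c-means, the sketch below implements a plain NumPy fuzzy c-means and shows (in comments) how it could be applied to the color pixels of an IHC image to separate positive from negative cells; the two-cluster setup and fuzzifier m = 2 are assumptions, and the adaptive VDF enhancement stage is not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain NumPy fuzzy c-means; X has shape (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    u = rng.random((X.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # random fuzzy memberships
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]     # membership-weighted centers
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        new_u = 1.0 / (dist ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=1, keepdims=True)          # updated memberships
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# Hypothetical usage on an RGB IHC image loaded as a (H, W, 3) uint8 array `img`:
# pixels = img.reshape(-1, 3).astype(float) / 255.0
# centers, memberships = fuzzy_c_means(pixels, n_clusters=2)
# labels = memberships.argmax(axis=1).reshape(img.shape[:2])  # positive vs. negative cells
```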
Procedia PDF Downloads 65
2556 Medical Images Enhancement Using New Dynamic Band Pass Filter
Authors: Abdellatif Baba
Abstract:
In order to facilitate the analysis of medical images by improving their quality and readability, we present in this paper a new dynamic band-pass filter as a general and suitable operator for different types of medical images. Our objective is to enrich the details of any treated medical image, making it clear enough to convey an understandable and simplified meaning, even for people who are not specialized in the medical domain.
Keywords: medical image enhancement, dynamic band pass filter, analysis improvement
Procedia PDF Downloads 287
2555 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing
Authors: Abdullah Bal, Sevdenur Bal
Abstract:
This paper proposes a new type of hardware application for the training of cellular neural networks (CNN) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during the test process. Since optoelectronic hardware offers the possibility of parallel, high-speed processing for 2D data, the CNN training algorithm can be realized using the Fourier optics technique. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each iteration of training, JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. The bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which reduces the computation required for the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system.
Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware
Procedia PDF Downloads 506
2554 Resistance and Sub-Resistances of RC Beams Subjected to Multiple Failure Modes
Authors: F. Sangiorgio, J. Silfwerbrand, G. Mancini
Abstract:
Geometric and mechanical properties all influence the resistance of RC structures and may, in certain combinations of property values, increase the risk of a brittle failure of the whole system. This paper presents a statistical and probabilistic investigation of the resistance of RC beams designed according to Eurocodes 2 and 8 and subjected to multiple failure modes, under both the natural variation of material properties and the uncertainty associated with cross-section and transverse reinforcement geometry. A full probabilistic model based on the JCSS Probabilistic Model Code is derived. Different beams are studied through material nonlinear analysis via Monte Carlo simulations. The resistance model is consistent with Eurocode 2. Both a multivariate statistical evaluation and a data clustering analysis of the outcomes are then performed. The results show that the ultimate load behaviour of RC beams subjected to flexural and shear failure modes seems to be mainly influenced by the combination of the mechanical properties of both the longitudinal reinforcement and the stirrups, and by the tensile strength of the concrete, the latter of which appears to affect the overall response of the system in a nonlinear way. The model uncertainty of the resistance model used in the analysis undoubtedly plays an important role in interpreting the results.
Keywords: modelling, Monte Carlo simulations, probabilistic models, data clustering, reinforced concrete members, structural design
Procedia PDF Downloads 471
2553 Development of an Image-Based Biomechanical Model for Assessment of Hip Fracture Risk
Authors: Masoud Nasiri Sarvi, Yunhua Luo
Abstract:
Low-trauma hip fracture, usually caused by a fall from standing height, has become a main source of morbidity and mortality for the elderly. Factors affecting hip fracture include sex, race, age, body weight, height, body mass distribution, etc., and thus hip fracture risk in a fall differs widely from subject to subject. It is therefore necessary to develop a subject-specific biomechanical model to predict hip fracture risk. The objective of this study is to develop a two-level, image-based, subject-specific biomechanical model consisting of a whole-body dynamics model and a proximal-femur finite element (FE) model for more accurately assessing the risk of hip fracture in lateral falls. The information required for constructing the model is extracted from a whole-body and a hip DXA (Dual Energy X-ray Absorptiometry) image of the subject. The proposed model considers all parameters subject-specifically, which will provide a fast, accurate, and inexpensive method for predicting hip fracture risk.
Keywords: bone mineral density, hip fracture risk, impact force, sideways falls
Procedia PDF Downloads 534
2552 Exploiting JPEG2000 into Reversible Information
Authors: Te-Jen Chang, I-Hui Pan, Kuang-Hsiung Tan, Shan-Jen Cheng, Chien-Wu Lan, Chih-Chan Hu
Abstract:
With the advent of the multimedia age, information hiding technologies have been proposed to protect data from being tampered with, damaged, or faked. Information hiding means that important secret information is hidden in cover multimedia, producing camouflaged media. This camouflaged media has the characteristic of natural protection: because it arouses no suspicion, the important secret information can be transmitted. A reversible information hiding technology with high capacity is proposed in this paper. Gray-scale images serve as the cover media in this technology. We compress the gray images and compare them with the original images to produce estimated differences. By using these estimated differences, expansion-based information hiding is applied, and a higher information capacity can be achieved. According to the experimental results, the proposed technology is validated: in these experiments, both the information payload capacity and the image quality are satisfactory.
Keywords: cover media, camouflaged media, reversible information hiding, gray image
Procedia PDF Downloads 326
2551 New Efficient Method for Coding Color Images
Authors: Walaa M.Abd-Elhafiez, Wajeb Gharibi
Abstract:
In this paper, a novel color image compression technique for efficient storage and delivery of data is proposed. The proposed compression technique starts with an RGB to YCbCr color transformation. Secondly, the Canny edge detection method is used to classify the blocks into edge and non-edge blocks. Each color component (Y, Cb, and Cr) is compressed by a discrete cosine transform (DCT), then quantized and coded step by step using adaptive arithmetic coding. Our technique is concerned with the compression ratio, bits per pixel, and peak signal-to-noise ratio, and it produces better results than JPEG and more recently published schemes (such as CBDCT-CABS and MHC). The provided experimental results illustrate that the proposed technique is efficient and feasible in terms of compression ratio, bits per pixel, and peak signal-to-noise ratio.
Keywords: image compression, color image, q-coder, quantization, edge-detection
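A minimal sketch of the front end of such a pipeline (color transform, Canny edge map, 8x8 block DCT with uniform quantization); the adaptive arithmetic coding stage and the edge-adaptive quantization are only indicated in comments, and the input file name and quantization steps are assumptions. Note that OpenCV's conversion yields the Y, Cr, Cb channel order.

```python
import cv2
import numpy as np
from scipy.fft import dctn

def block_dct_quantize(channel: np.ndarray, q_step: float, block: int = 8) -> np.ndarray:
    """8x8 block DCT followed by uniform quantization (a stand-in for the
    paper's adaptive quantization and arithmetic coding stages)."""
    h, w = channel.shape
    h, w = h - h % block, w - w % block
    out = np.empty((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(channel[i:i + block, j:j + block].astype(float), norm="ortho")
            out[i:i + block, j:j + block] = np.round(coeffs / q_step)
    return out

bgr = cv2.imread("input.png")                         # hypothetical input image
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)        # channels ordered Y, Cr, Cb
edges = cv2.Canny(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 100, 200)
print(f"fraction of edge pixels: {edges.mean() / 255.0:.3f}")

for idx, name in enumerate(["Y", "Cr", "Cb"]):
    # Edge blocks could be given a finer quantization step than non-edge blocks;
    # a single step per channel keeps this sketch short.
    q = 8 if name == "Y" else 16
    coeffs = block_dct_quantize(ycrcb[:, :, idx], q_step=q)
    print(name, "nonzero quantized coefficients:", int(np.count_nonzero(coeffs)))
```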
Procedia PDF Downloads 327
2550 The Effect of Closed Circuit Television Image Patch Layout on Performance of a Simulated Train-Platform Departure Task
Authors: Aaron J. Small, Craig A. Fletcher
Abstract:
This study investigates the effect of closed circuit television (CCTV) image patch layout on performance of a simulated train-platform departure task. The within-subjects experimental design measures target detection rate and response latency during a CCTV visual search task conducted as part of the procedure for safe train dispatch. Three interface designs were developed by manipulating CCTV image patch layout. Eye movements, perceived workload and system usability were measured across experimental conditions. Task performance was compared to identify significant differences between conditions. The results of this study have not been determined.
Keywords: rail human factors, workload, closed circuit television, platform departure, attention, information processing, interface design
Procedia PDF Downloads 166
2549 Contradictive Representation of Women in Postfeminist Japanese Media
Authors: Emiko Suzuki
Abstract:
Although some claim that we are in a post-feminist society, the word "postfeminism" still raises questions for many. In postfeminist media, as the British sociologist Rosalind Gill points out, on the one hand, women are promoted through an empowering image of being active, positively sexually motivated, free to make market choices, and attentive to the surveillance and discipline of their personality and body; yet on the other hand, such a beautiful and attractive feminist image imposes an even stronger surveillance of mind and body on women. A similar representation, in which femininity is described in a contradictory way, is seen in Japanese media as well. This study tries to capture how postfeminist Japanese media, contrary to its ostensible messages, encourages women to submit to the labor system by affirming the traditional image of attractive women through sexual objectification and by promoting the values of neoliberalism. The result offers an interesting insight into how Japanese media creates a conflicting ideal representation of women by repeatedly exposing such images.
Keywords: postfeminism, Japanese media, sexual objectification, embodiment
Procedia PDF Downloads 195
2548 The Mediating Role of Bank Image in Customer Satisfaction Building
Abstract:
The main objective of this research was to determine the dimensions of service quality in the banking industry of Iran. For this purpose, the study empirically examined the European perspective, which suggests that service quality consists of three dimensions: technical, functional, and image. This research is applied research, and its strategy is a causal one. A standard questionnaire was used for collecting the data. 287 customers of Melli Bank of the Northwest were selected through cluster sampling and were studied. The results from the banking service sample revealed that overall service quality is influenced more by a consumer's perception of technical quality than of functional quality. Accordingly, the Gronroos model is a more appropriate representation of service quality in the banking industry of Iran than the American perspective, with its limited concentration on the dimension of functional quality. Thus, knowing the key dimensions of service quality in this industry and planning for their improvement can increase customer satisfaction and the productivity of this industry.
Keywords: technical quality, functional quality, banking, image, mediating role
Procedia PDF Downloads 368
2547 Narrating 1968: Felipe Cazals' Canoa (1976) and Images of Massacre
Authors: Nancy Elizabeth Naranjo Garcia
Abstract:
Canoa (1976) by Felipe Cazals is a film that exposes the consequences of the power that the Mexican State exercised over the 1968 student movement. The film, in this particular way, approaches the Tlatelolco Massacre from a point of view that takes into consideration the events that led up to it. Nonetheless, the reference to the political tension in Canoa remains ambiguous. Thus, the cinematographic representation refers to an event that leaves space for reflection and, as a consequence, leaves evidence of an image that signals the notion of survival, as Georges Didi-Huberman points out. In addition to denouncing the oppressive force of the Mexican State, the images in Canoa also emphasize what did not happen in Tlatelolco and its condensation with the student activists. To observe the images that Canoa offers in a new light, this work proposes further exploration through the following questions: How do the images in Canoa narrate? How are the images inserted in the film? In this fashion, a more profound comprehension of the objective and the essence of the images becomes feasible. As a result, it is possible to analyze the images of Canoa alongside the real killing at San Miguel Canoa as recorded in the literature. The film visualizes a testimony of an event that once seemed unimaginable, an image that anticipates and structures the event that follows. Therefore, this study takes a second look at how Canoa not only considers the killing at San Miguel Canoa and the Tlatelolco Massacre, but also goes further to contextualize an unimaginable image.
Keywords: cinematographic representation, student movement, Tlatelolco Massacre, unimaginable image
Procedia PDF Downloads 216
2546 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification
Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong
Abstract:
It remains challenging to measure reliability from classification results from different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in the breast ultrasound classification models. Experimental results on breast ultrasound clinical datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials. The results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibit low multicenter correlation.
Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization
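For context, the underlying Model Soup operation is parameter averaging across fine-tuned models of the same architecture; the PyTorch sketch below shows that uniform averaging step. The reliability-driven ingredient selection described in the abstract is not reproduced, and the ResNet-18 backbone and binary output are assumptions.

```python
import copy
import torch
import torchvision

def uniform_soup(models):
    """Average the parameters of several fine-tuned models with identical
    architecture (the uniform variant of Model Soup; the paper's reliable
    soup additionally selects ingredients by a reliability objective)."""
    souped = copy.deepcopy(models[0])
    state_dicts = [m.state_dict() for m in models]
    averaged = {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }
    souped.load_state_dict(averaged)
    return souped

# Hypothetical usage: three ResNet-18 classifiers fine-tuned on breast ultrasound data.
models = [torchvision.models.resnet18(num_classes=2) for _ in range(3)]
soup = uniform_soup(models)
print(sum(p.numel() for p in soup.parameters()), "parameters in the souped model")
```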
Procedia PDF Downloads 83
2545 Iris Detection on RGB Image for Controlling Side Mirror
Authors: Norzalina Othman, Nurul Na’imy Wan, Azliza Mohd Rusli, Wan Noor Syahirah Meor Idris
Abstract:
Iris detection is a process in which the position of the eyes is extracted from face images. It is a current method used in many applications, such as security and drowsiness detection. This paper proposes the use of eye detection for controlling the side mirrors of motor vehicles. The eye detection method aims to make it easy for the driver to adjust the side mirrors automatically. The system determines the midpoint coordinates of the detected eyes in an RGB (color) image, and the y-coordinate is sent as an input signal to the controller in order to rotate the angle of the side mirror of the vehicle. The eye region was cropped, and the midpoint coordinates were successfully detected from the circle of the iris using the Viola-Jones detector and the circular Hough transform on the RGB image. The midpoint coordinates from the experiment were then tested with the controller to determine the angle of rotation of the side mirrors.
Keywords: iris detection, midpoint coordinates, RGB images, side mirror
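A minimal OpenCV sketch of the detection chain (Viola-Jones eye cascade followed by a circular Hough transform inside each eye region, then the midpoint whose y-coordinate would drive the mirror controller); the input file name and the Hough parameters are assumptions, and the controller interface is only indicated by the final print.

```python
import cv2
import numpy as np

# Hypothetical input frame of the driver's face.
frame = cv2.imread("driver.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Viola-Jones eye detector shipped with OpenCV.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

midpoints = []
for (x, y, w, h) in eyes:
    roi = gray[y:y + h, x:x + w]
    # Circular Hough transform to locate the iris inside the eye region.
    circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1, minDist=w,
                               param1=100, param2=20,
                               minRadius=w // 8, maxRadius=w // 3)
    if circles is not None:
        cx, cy, r = np.round(circles[0, 0]).astype(int)
        midpoints.append((x + cx, y + cy))   # iris midpoint in full-frame coordinates

if midpoints:
    mid_y = int(np.mean([p[1] for p in midpoints]))
    print("y-coordinate sent to the mirror controller:", mid_y)
```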
Procedia PDF Downloads 421
2544 Fracture Crack Monitoring Using Digital Image Correlation Technique
Authors: B. G. Patel, A. K. Desai, S. G. Shah
Abstract:
The main objective of this paper is to develop a new non-contact measurement technique. DIC is an advanced measurement technique used to measure the displacement of particles with very high accuracy. It is a powerful and innovative technique in which two image segments are correlated to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with fibers (steel fibers and glass fibers) and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine with crack mouth opening displacement control at an opening rate of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instances of loading, and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length, and crack tip location. It was seen that the CMOD and the vertical load-point displacement computed using DIC analysis match well with those measured experimentally.
Keywords: Digital Image Correlation, fibres, self compacting concrete, size effect
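At its core, DIC tracks a small subset of the reference image inside a search window of the deformed image; the sketch below does this for a single subset with OpenCV's normalized cross-correlation (integer-pixel accuracy only, whereas practical DIC adds sub-pixel interpolation), and the file names, subset size, and search radius are assumptions.

```python
import cv2
import numpy as np

def subset_displacement(ref: np.ndarray, cur: np.ndarray, x: int, y: int,
                        subset: int = 31, search: int = 20):
    """Displacement of a square subset centred at (x, y) between a reference
    and a deformed image, via zero-normalised cross-correlation."""
    half = subset // 2
    template = ref[y - half:y + half + 1, x - half:x + half + 1]
    window = cur[y - half - search:y + half + search + 1,
                 x - half - search:x + half + search + 1]
    score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(score)
    u = max_loc[0] - search   # horizontal displacement in pixels
    v = max_loc[1] - search   # vertical displacement in pixels
    return u, v, max_val

# Hypothetical before/after images of the beam surface captured during the test.
ref = cv2.imread("frame_unloaded.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("frame_loaded.png", cv2.IMREAD_GRAYSCALE)
u, v, corr = subset_displacement(ref, cur, x=400, y=300)
print(f"subset displacement: u={u}px, v={v}px (correlation {corr:.2f})")
```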
Procedia PDF Downloads 387
2543 Clothes Identification Using Inception ResNet V2 and MobileNet V2
Authors: Subodh Chandra Shakya, Badal Shrestha, Suni Thapa, Ashutosh Chauhan, Saugat Adhikari
Abstract:
To tackle the problem of clothes identification, we used different architectures of convolutional neural networks. Among the different architectures, the outcomes from Inception ResNet V2 and MobileNet V2 seemed promising. On comparing the metrics, we observed that Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. This paper therefore proposes a clothes identifier using Inception ResNet V2 and also contains a comparison between the outcomes of Inception ResNet V2 and MobileNet V2. The document contains the results and findings of the research that we performed on the DeepFashion dataset. To improve the dataset, we used different image preprocessing techniques such as image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on clothes identification so that we could develop a reliable system that is good enough at identifying the clothes worn by users. The whole system can be integrated with some kind of recommendation system.
Keywords: inception ResNet, convolutional neural net, deep learning, confusion matrix, data augmentation, data preprocessing
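A minimal Keras transfer-learning sketch for comparing the two backbones; the 224x224 input size, frozen backbone, single dense head, and the 46-category DeepFashion label space are assumptions rather than the authors' exact training setup.

```python
import tensorflow as tf

def build_classifier(backbone_name: str, num_classes: int = 46, img_size: int = 224):
    """Transfer-learning classifier on top of a pretrained backbone.
    num_classes=46 assumes DeepFashion's category labels."""
    backbones = {
        "inception_resnet_v2": tf.keras.applications.InceptionResNetV2,
        "mobilenet_v2": tf.keras.applications.MobileNetV2,
    }
    base = backbones[backbone_name](include_top=False, weights="imagenet",
                                    input_shape=(img_size, img_size, 3), pooling="avg")
    base.trainable = False                      # fine-tune only the new head first
    inputs = tf.keras.Input(shape=(img_size, img_size, 3))
    x = base(inputs, training=False)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier("mobilenet_v2")
model.summary()
```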
Procedia PDF Downloads 186
2542 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives
Authors: Roberto Cabezas H
Abstract:
The current work shows the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research is focused on exploring unconventional roles for machines in subjective creative processes by delving into the semantics of data and machine intelligence algorithms in hybrid technological and creative contexts, to expand epistemic domains through human-machine cooperation. The creative process in the scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of Generative Adversarial Networks (GANs) and fuzzy clustering, based on a semi-supervised data creation and analysis pipeline. The model makes use of GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy logic clustering is then applied to the data artificially created by the GANs to define narrative transitions built on the membership index. This process allowed for the creation of simple and complex spaces with expressive capabilities, based on position and light intensity as the parameters guiding the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques for the implementation of the aided design system. Machine intelligence tools, as proposed in this work, are well suited to redefining collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found in GANs and fuzzy logic ideal tools to develop new computational models based on interaction, learning, emotion, and imagination, expanding the traditional algorithmic model of computation.
Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance
Procedia PDF Downloads 141
2541 Addressing the Exorbitant Cost of Labeling Medical Images with Active Learning
Authors: Saba Rahimi, Ozan Oktay, Javier Alvarez-Valle, Sujeeth Bharadwaj
Abstract:
Successful application of deep learning in medical image analysis necessitates unprecedented amounts of labeled training data. Unlike conventional 2D applications, radiological images can be three-dimensional (e.g., CT, MRI), consisting of many instances within each image. The problem is exacerbated when expert annotations are required for effective pixel-wise labeling, which incurs exorbitant labeling effort and cost. Active learning is an established research domain that aims to reduce labeling workload by prioritizing a subset of informative unlabeled examples to annotate. Our contribution is a cost-effective approach for U-Net 3D models that uses Monte Carlo sampling to analyze pixel-wise uncertainty. Experiments on the AAPM 2017 lung CT segmentation challenge dataset show that our proposed framework can achieve promising segmentation results by using only 42% of the training data.
Keywords: image segmentation, active learning, convolutional neural network, 3D U-Net
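A minimal sketch of the Monte Carlo uncertainty scoring that drives such an acquisition loop: dropout is kept active at inference, several stochastic forward passes are averaged, and the mean voxel-wise predictive entropy ranks unlabeled volumes. The number of passes, the entropy criterion, and the commented usage are assumptions, not the authors' exact acquisition function.

```python
import torch

@torch.no_grad()
def mc_dropout_uncertainty(model: torch.nn.Module, volume: torch.Tensor, passes: int = 10):
    """Mean voxel-wise predictive entropy of a segmentation model under
    Monte Carlo dropout; higher values suggest the volume is more informative
    to annotate next. `model` is any 3D segmentation net containing dropout."""
    model.train()                       # keep dropout active at inference time
    probs = torch.stack([torch.softmax(model(volume), dim=1) for _ in range(passes)])
    mean_p = probs.mean(dim=0)                                  # (B, C, D, H, W)
    entropy = -(mean_p * torch.log(mean_p + 1e-8)).sum(dim=1)   # per-voxel entropy
    return entropy.mean().item()

# Hypothetical usage: rank unlabeled CT volumes and send the most uncertain for labeling.
# scores = {vid: mc_dropout_uncertainty(unet3d, vol) for vid, vol in unlabeled.items()}
# to_label = sorted(scores, key=scores.get, reverse=True)[:budget]
```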
Procedia PDF Downloads 152