Search results for: image transformation
3589 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves
Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira
Abstract:
Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help monitor large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of digital images of tomato plants collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for identifying healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The new images contain only green, red or black pixels, depending on whether the pixels came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in detecting and estimating the level of damage caused by late blight on the tomato leaves. Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary
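As a rough illustration of the pixel-wise pipeline described above, the sketch below classifies each pixel from RGB and HLS colour features with two small MLPs and repaints the image; the layer sizes, thresholds and combination rule are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the pixel-wise classification and
# repainting idea: two MLPs score each pixel from RGB + HLS features, and the
# image is repainted green (healthy), red (injured) or black (background).
# Layer sizes and the combination rule are assumptions for illustration.
import numpy as np
import colorsys
from sklearn.neural_network import MLPClassifier

def pixel_features(img_rgb):
    """Stack RGB and HLS values for every pixel into an (N, 6) feature matrix."""
    rgb = img_rgb.reshape(-1, 3).astype(float) / 255.0
    hls = np.array([colorsys.rgb_to_hls(*p) for p in rgb])
    return np.hstack([rgb, hls])

# Two binary classifiers: one for "healthy leaf" pixels, one for "injured" pixels.
healthy_net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
injured_net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
# healthy_net.fit(train_features, healthy_labels); injured_net.fit(train_features, injured_labels)

def repaint(img_rgb):
    feats = pixel_features(img_rgb)
    p_healthy = healthy_net.predict_proba(feats)[:, 1]
    p_injured = injured_net.predict_proba(feats)[:, 1]
    out = np.zeros_like(img_rgb)                                       # default: black background
    flat = out.reshape(-1, 3)
    flat[(p_healthy > 0.5) & (p_healthy >= p_injured)] = [0, 255, 0]   # green: healthy
    flat[(p_injured > 0.5) & (p_injured > p_healthy)] = [255, 0, 0]    # red: injured
    return out

def infestation_level(repainted):
    """Ratio of injured (red) pixels to all leaf (red + green) pixels."""
    red = np.all(repainted == [255, 0, 0], axis=-1).sum()
    green = np.all(repainted == [0, 255, 0], axis=-1).sum()
    return red / max(red + green, 1)
```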
Procedia PDF Downloads 330
3588 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, creating a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate strong correlations between the reconstruction error, the distinctiveness of images, and their memorability scores. This suggests that images with more unique, distinctive features that challenge the autoencoder's compressive capacities are inherently more memorable. The memorability score also correlates negatively with the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory. Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
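A minimal sketch of the two measures described above, under stated assumptions (latent codes and reconstructions are taken as already computed; Spearman rank correlation is used as a stand-in statistic):

```python
# A minimal sketch (assumptions, not the study's exact pipeline): given latent
# codes and reconstructions from a pre-trained autoencoder, compute per-image
# reconstruction error and latent-space distinctiveness (distance to the nearest
# neighbour), then correlate both with memorability scores.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import spearmanr

def reconstruction_error(originals, reconstructions):
    """Mean squared error per image; the study also considers perceptual losses."""
    diff = (originals - reconstructions).reshape(len(originals), -1)
    return np.mean(diff ** 2, axis=1)

def distinctiveness(latents):
    """Euclidean distance from each latent code to its nearest neighbour."""
    d = cdist(latents, latents)            # pairwise distances
    np.fill_diagonal(d, np.inf)            # ignore self-distance
    return d.min(axis=1)

def correlate_with_memorability(errors, distinct, memorability):
    return {
        "error_vs_memorability": spearmanr(errors, memorability)[0],
        "distinctiveness_vs_memorability": spearmanr(distinct, memorability)[0],
    }
```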
Procedia PDF Downloads 92
3587 Brand Equity Tourism Destinations: An Application in Wine Regions Comparing Visitors' and Managers' Perspectives
Abstract:
The concept of brand equity in the wine tourism area is an interesting topic, and the factors that determine it merit exploration. The aim of this study is to address this gap by investigating wine tourism destination brand equity and understanding the impact that the denomination of origin (DO) brand image and the destination image have on brand equity. Managing and monitoring the branding of wine tourism destinations is crucial to attract tourist arrivals. The multiplicity of stakeholders involved in the branding process calls for research that, unlike previous studies, adopts a broader perspective and incorporates both an internal and an external perspective. Therefore, this gap has been addressed by comparing managers' and visitors' approaches to wine tourism destination brand equity. A survey questionnaire was used for data collection. The hypotheses were tested using winery managers and winery visitors, each holding a different position relative to the wine tourism destination brand equity. All the interviews were conducted face-to-face. The survey instrument included several scales related to DO brand image, destination image, and wine tourism destination brand equity. All items were measured on seven-point Likert scales. Partial least squares was used to analyze the accuracy of the scales and the structural model, and multi-group analysis was used to identify differences in the path coefficients and to test the hypotheses. The results show that the positive influence of DO brand image on wine tourism destination brand equity is stronger for wineries than for visitors, but there are no significant differences between the two groups. However, there are significant differences in the positive effect of destination brand image on both wine tourism destination brand equity and DO brand image. The results of this study are important for consultants, practitioners, and policy makers. The gap between managers and visitors calls for the development of a number of campaigns to enhance the image that visitors hold and, thus, increase tourist arrivals. Events such as wine gatherings and gastronomic symposiums held at universities and culinary schools, and participation in business meetings, can enhance the perceptions and, in turn, the added value and brand equity of the wine tourism destinations. The images of destinations and DOs can help strengthen the brand equity of the wine tourism destinations, especially for visitors. Thus, the development and reinforcement of favorable, strong, and unique destination associations and DO associations are important to increase that value. Joint campaigns are advisable to enhance the images of destinations and DOs and, as a consequence, the value of the wine tourism destination brand. Keywords: brand equity, managers, visitors, wine tourism
Procedia PDF Downloads 134
3586 Computational Cell Segmentation in Immunohistochemical Images of Meningioma Tumor Using Fuzzy C-Means and Adaptive Vector Directional Filter
Authors: Vahid Anari, Leila Shahmohammadi
Abstract:
Manually diagnosing and interpreting a large cohort dataset of immunohistochemically stained tumor tissue using an optical microscope involves subjectivity and is tedious for specialist pathologists. Moreover, digital pathology today represents more of an evolution than a revolution in pathology. In this paper, we develop and test an unsupervised algorithm that can automatically enhance the IHC image of a meningioma tumor and classify cells into positive (proliferative) and negative (normal) cells. A dataset including 150 images is used to test the scheme. In addition, a new adaptive color image enhancement method is proposed based on a vector directional filter (VDF) and the statistical properties of the filtering window. Since the cells are distinguishable by the human eye, the accuracy and stability of the algorithm are quantitatively compared through application to a wide variety of real images. Keywords: digital pathology, cell segmentation, immunohistochemically, noise reduction
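A hedged sketch of the fuzzy c-means step on pixel colours; the use of the scikit-fuzzy package, the three-cluster setup and the parameter values are illustrative assumptions rather than the authors' exact method:

```python
# A hedged sketch of the fuzzy c-means step: cluster IHC pixel colours into
# background, negative (normal) and positive (proliferative) classes.  The
# number of clusters and the use of scikit-fuzzy are illustrative assumptions.
import numpy as np
import skfuzzy as fuzz

def fcm_segment(img_rgb, n_clusters=3, m=2.0):
    pixels = img_rgb.reshape(-1, 3).astype(float).T        # skfuzzy expects (features, samples)
    cntr, u, *_ = fuzz.cluster.cmeans(pixels, c=n_clusters, m=m,
                                      error=1e-4, maxiter=200)
    labels = np.argmax(u, axis=0)                           # hard label per pixel
    return labels.reshape(img_rgb.shape[:2]), cntr          # label map and cluster centres
```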
Procedia PDF Downloads 68
3585 Medical Image Enhancement Using a New Dynamic Band Pass Filter
Authors: Abdellatif Baba
Abstract:
In order to facilitate medical image analysis by improving image quality and readability, we present in this paper a new dynamic band pass filter as a general and suitable operator for different types of medical images. Our objective is to enrich the details of any treated medical image so that it becomes clear enough to convey an understandable, simplified meaning even to people not specialized in the medical domain. Keywords: medical image enhancement, dynamic band pass filter, analysis improvement
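For orientation, a generic frequency-domain band-pass sketch is given below; the paper's dynamic adaptation of the pass band is not reproduced, and the cut-off radii are illustrative assumptions.

```python
# A generic frequency-domain band-pass sketch (the paper's dynamic adaptation of
# the pass band is not reproduced here; cut-off radii are illustrative).
import numpy as np

def bandpass(image, low_cut=5, high_cut=60):
    """Keep spatial frequencies whose radius lies between low_cut and high_cut."""
    f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - rows / 2, x - cols / 2)     # distance from the spectrum centre
    mask = (radius >= low_cut) & (radius <= high_cut)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```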
Procedia PDF Downloads 290
3584 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing
Authors: Abdullah Bal, Sevdenur Bal
Abstract:
This paper proposes a new type of hardware application for training cellular neural networks (CNN) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage compared to the test process. Since optoelectronic hardware offers the possibility of parallel, high-speed processing for 2D data, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, JTC inherently carries most of the computational burden, and the rest of the mathematical computation is realized digitally. The bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which reduces the computation required for the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques in the same optical system. Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware
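A purely numerical emulation of the JTC principle (not the optical setup) may help fix ideas: two inputs are placed side by side, the squared magnitude of their Fourier transform plays the role of the joint power spectrum recorded by the CCD, and a second transform yields the correlation plane whose off-axis terms are the cross-correlations used during training. The layout and the digital FFT stand-in are assumptions.

```python
# A numerical sketch (not the optical setup) of the joint transform correlation idea.
import numpy as np

def jtc_cross_correlation(a, b):
    """Emulate JTC for two equally sized 2-D arrays a and b."""
    h, w = a.shape
    joint = np.zeros((h, 3 * w))           # side-by-side joint input image
    joint[:, :w] = a
    joint[:, 2 * w:] = b
    jps = np.abs(np.fft.fft2(joint)) ** 2  # joint power spectrum (intensity on the CCD)
    corr_plane = np.fft.fftshift(np.abs(np.fft.ifft2(jps)))
    return corr_plane                      # off-centre peaks encode the cross-correlation of a and b
```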
Procedia PDF Downloads 507
3583 Development of an Image-Based Biomechanical Model for Assessment of Hip Fracture Risk
Authors: Masoud Nasiri Sarvi, Yunhua Luo
Abstract:
Low-trauma hip fracture, usually caused by a fall from standing height, has become a main source of morbidity and mortality for the elderly. Factors affecting hip fracture include sex, race, age, body weight, height, body mass distribution, etc., and thus hip fracture risk in a fall differs widely from subject to subject. It is therefore necessary to develop a subject-specific biomechanical model to predict hip fracture risk. The objective of this study is to develop a two-level, image-based, subject-specific biomechanical model consisting of a whole-body dynamics model and a proximal-femur finite element (FE) model for more accurately assessing the risk of hip fracture in lateral falls. The information required for constructing the model is extracted from a whole-body and a hip DXA (Dual Energy X-ray Absorptiometry) image of the subject. The proposed model considers all parameters subject-specifically, which will provide a fast, accurate, and inexpensive method for predicting hip fracture risk. Keywords: bone mineral density, hip fracture risk, impact force, sideways falls
Procedia PDF Downloads 536
3582 Exploiting JPEG2000 into Reversible Information
Authors: Te-Jen Chang, I-Hui Pan, Kuang-Hsiung Tan, Shan-Jen Cheng, Chien-Wu Lan, Chih-Chan Hu
Abstract:
With the advent of the multimedia age, information hiding technologies have been proposed to protect data from being tampered with, damaged, or faked. Information hiding means that important secret information is hidden within cover multimedia, producing camouflaged media. This camouflaged media has the characteristic of natural protection, so that the important secret information can be transmitted without raising suspicion. A reversible information hiding technology with high capacity is proposed in this paper. Gray images are used as cover media in this technology. We compress the gray images and compare them with the original images to produce estimated differences. Information hiding is then applied by expressing these estimated differences, and a higher information capacity can be achieved. The experimental results validate the proposed technology: in these experiments, both the information payload capacity and the image quality are satisfactory. Keywords: cover media, camouflaged media, reversible information hiding, gray image
Procedia PDF Downloads 329
3581 The Effect of Closed Circuit Television Image Patch Layout on Performance of a Simulated Train-Platform Departure Task
Authors: Aaron J. Small, Craig A. Fletcher
Abstract:
This study investigates the effect of closed circuit television (CCTV) image patch layout on performance of a simulated train-platform departure task. The within-subjects experimental design measures target detection rate and response latency during a CCTV visual search task conducted as part of the procedure for safe train dispatch. Three interface designs were developed by manipulating CCTV image patch layout. Eye movements, perceived workload and system usability were measured across experimental conditions. Task performance was compared to identify significant differences between conditions. The results of this study have not been determined. Keywords: rail human factors, workload, closed circuit television, platform departure, attention, information processing, interface design
Procedia PDF Downloads 168
3580 Contradictive Representation of Women in Postfeminist Japanese Media
Authors: Emiko Suzuki
Abstract:
Although some claim that we are in a post-feminist society, the word “postfeminism” still raises questions for many. As the British sociologist Rosalind Gill points out, postfeminist media, on the one hand, seems to promote an empowering image of women who are active, positively sexually motivated, have the free will to make market choices, and exercise surveillance and discipline over their personality and body; yet, on the other hand, this beautiful and attractive feminist image imposes stronger surveillance of women's minds and bodies. A similar representation, in which femininity is described in a contradictory way, is seen in Japanese media as well. This study tries to capture how postfeminist Japanese media, contrary to its ostensible messages, encourages women to obey the labor system by affirming the traditional image of attractive women through sexual objectification and by promoting the values of neoliberalism. The result offers interesting insight into how Japanese media creates a conflicting ideal representation of women by repeatedly exposing such images. Keywords: postfeminism, Japanese media, sexual objectification, embodiment
Procedia PDF Downloads 196
3579 Morphological Transformation of Traditional Cities: The Case Study of the Historic Center of the City of Najaf
Authors: Sabeeh Lafta Farhan, Ihsan Abbass Jasim, Sohaib Kareem Al-Mamoori
Abstract:
This study addresses the transformation of urban structures and how this transformation affects the character of traditional cities, which represents the research issue. Hence, the research has aimed at studying and learning about the urban structure characteristics and morphological transformation features in traditional city centers, and at looking for means and methods to preserve the character of those cities. Cities are not merely locations inhabited by a large number of people; they are political and legal entities, in addition to the economic activities that distinguish them. They are thus a complex set of institutions, and the transformation of the urban environment cannot be understood without understanding these relationships. The research presumes an existing impact of urbanization on the properties of the traditional structure of the Holy City of Najaf. The research has defined urbanization as the restructuring and re-planning of urban areas that have lost their functions, bringing them back into the social and cultural life of the city so that they can serve the economy and better respond to the needs of users. Sacred cities provide the organic connection between acts of worship and dealings and reveal the mechanisms and reasons behind the regulatory nature of the sacred shrine and its role in achieving organizational assimilation of urban morphology. The research has reached a theoretical framework of the particulars of urbanization. This framework has been applied to the historic center of the old city of Najaf. The most important findings of the research are that the visually and structurally dominant presence of the holy shrine of Imam Ali (peace be upon him) continues to emphasize the visual particularity and the main role of the city, which hosts one of the most important Muslim shrines in the world, with the golden dome visible above the skyline and the Imam Ali Mosque serving as the hub and center of religious activities. Thus, as a place of major importance and a symbol of religious and Islamic culture, it is very important that the shrine of Imam Ali (AS) prevail over all redevelopment zones in the old city. Consequently, the research underlined that the distinctive and unique character of the city of Najaf did not emerge from nothing, but was achieved through the unrivaled characteristics and features possessed by Najaf alone, which enabled it to occupy this status among Arab and Muslim cities. That is why development activities have to enhance the historical role of the city, so that development clearly supports, strengthens, and adds to the city's assets and cultural heritage, rather than crushing the city's traditional urban fabric, cultural heritage, and historical specificity. Keywords: Iraq, the city of Najaf, heritage, traditional cities, morphological transformation
Procedia PDF Downloads 314
3578 The Mediating Role of Bank Image in Customer Satisfaction Building
Abstract:
The main objective of this research was to determine the dimensions of service quality in the banking industry of Iran. For this purpose, the study empirically examined the European perspective, which suggests that service quality consists of three dimensions: technical, functional and image. This is applied research, and its strategy is a causal one. A standard questionnaire was used for collecting the data. A total of 287 customers of Melli Bank of Northwest were selected through cluster sampling and studied. The results from the banking service sample revealed that overall service quality is influenced more by a consumer's perception of technical quality than functional quality. Accordingly, the Gronroos model is a more appropriate representation of service quality in the banking industry of Iran than the American perspective, with its limited concentration on the dimension of functional quality. Thus, knowing the key dimensions of service quality in this industry and planning for their improvement can increase customer satisfaction and the productivity of the industry. Keywords: technical quality, functional quality, banking, image, mediating role
Procedia PDF Downloads 370
3577 Narrating 1968: Felipe Cazals’ Canoa (1976) and Images of Massacre
Authors: Nancy Elizabeth Naranjo Garcia
Abstract:
Canoa (1976) by Felipe Cazals is a film that exposes the consequences of the power that the Mexican State exercised over the 1968 student movement. The film, in this particular way, approaches the Tlatelolco Massacre from a point of view that takes into consideration the events that led up to it. Nonetheless, the reference to the political tension in Canoa remains ambiguous. Thus, the cinematographic representation refers to an event that leaves space for reflection and, as a consequence, leaves evidence of an image that signals the notion of survival, as Georges Didi-Huberman points out. In addition to denouncing the oppressive force of the Mexican State, the images in Canoa also emphasize what did not happen in Tlatelolco and its condensation with the student activists. To observe the images that Canoa offers in a new light, this work proposes further exploration with the following questions: How do the images in Canoa narrate? How are the images inserted in the film? In this fashion, a more profound comprehension of the objective and the essence of the images becomes feasible. As a result, it is possible to analyze the images of Canoa alongside the real killing at San Miguel Canoa in literature. The film visualizes a testimony of the event that once seemed unimaginable, an image that anticipates and structures the ensuing event. Therefore, this study takes a second look at how Canoa considers not only the killing at San Miguel Canoa and the Tlatelolco Massacre, but also goes further to contextualize an unimaginable image. Keywords: cinematographic representation, student movement, Tlatelolco Massacre, unimaginable image
Procedia PDF Downloads 222
3576 Transformation of Aluminum Unstable Oxyhydroxides into Ultrafine α-Al2O3 in the Presence of Various Seeds
Authors: T. Kuchukhidze, N. Jalagonia, Z. Phachulia, R. Chedia
Abstract:
Ceramics obtained on the basis of aluminum oxide have a wide application range because of their unique properties, for example, wear resistance, dielectric characteristics, and the ability to operate at high temperatures and in corrosive atmospheres. Low-temperature synthesis of α-Al2O3 is an energy-economical process and is relevant for developing technologies of corundum ceramics fabrication. In the present work, the possibilities of low-temperature transformation of oxyhydroxides into α-Al2O3 in the presence of small amounts of rare-earth element compounds (as well as Th and Re) have been discussed. Unstable aluminium oxyhydroxides have been obtained by hydrolysis of aluminium isopropoxide, nitrates, sulphate, and chloride in an alkaline environment at temperatures of 80-90ºC. β-Al(OH)3 has been obtained from aluminium powder by ultrasonic treatment. Drying of the oxyhydroxide sol has been conducted in the presence of various types of seeds in amounts of 0.1-0.2% (mass). Neodymium, holmium, thorium, lanthanum, cerium, gadolinium, and dysprosium nitrates and rhenium carbonyls have been used as seeds; they have been added to the sol specimens in amounts of 0.1-0.2% (mass) calculated on the metals. Annealing of the obtained gels is carried out at 70-1100ºC for 2 hrs. The specimens transform into α-Al2O3 at 1100ºC. At this temperature, in the presence of lanthanum and gadolinium, the transformation proceeds to 70-85%. In the presence of thorium, stabilization of the γ- and θ-phases takes place. It is established that thorium inhibits α-phase generation at 1100ºC, while in all other doped specimens the α-phase is generated at lower temperatures (1000-1050ºC). During the work the following devices have been used: X-ray diffractometer DRON-3M (Cu-Kα, Ni filter, 2º/min), high-temperature vacuum furnace OXY-GON, electronic scanning microscopes Nikon ECLIPSE LV 150 and NMM-800TRF, planetary mill Pulverisette 7 premium line, SHIMADZU Dynamic Ultra Micro Hardness Tester DUH-211S, and Analysette 12 DynaSizer. Keywords: α-Alumina, combustion, phase transformation, seeding
Procedia PDF Downloads 395
3575 Reliable Soup: Reliable-Driven Model Weight Fusion on Ultrasound Imaging Classification
Authors: Shuge Lei, Haonan Hu, Dasheng Sun, Huabin Zhang, Kehong Yuan, Jian Dai, Yan Tong
Abstract:
It remains challenging to measure reliability from classification results from different machine learning models. This paper proposes a reliable soup optimization algorithm based on the model weight fusion algorithm Model Soup, aiming to improve reliability by using dual-channel reliability as the objective function to fuse a series of weights in the breast ultrasound classification models. Experimental results on breast ultrasound clinical datasets demonstrate that reliable soup significantly enhances the reliability of breast ultrasound image classification tasks. The effectiveness of the proposed approach was verified via multicenter trials. The results from five centers indicate that the reliability optimization algorithm can enhance the reliability of the breast ultrasound image classification model and exhibit low multicenter correlation. Keywords: breast ultrasound image classification, feature attribution, reliability assessment, reliability optimization
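A minimal sketch of the underlying model-soup operation that the proposed reliable soup builds on, with placeholder weights standing in for the paper's dual-channel reliability objective:

```python
# A minimal sketch of the weight-fusion step behind "model soup": average (or
# reliability-weight) the state dicts of several fine-tuned models.  The weights
# here are placeholders; the paper's dual-channel reliability objective is not reproduced.
import torch

def weighted_soup(state_dicts, weights=None):
    """Fuse a list of compatible model state dicts into one averaged state dict."""
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    fused = {}
    for key in state_dicts[0]:
        fused[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return fused

# usage: model.load_state_dict(weighted_soup([torch.load(p) for p in checkpoint_paths]))
```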
Procedia PDF Downloads 86
3574 Iris Detection on RGB Image for Controlling Side Mirror
Authors: Norzalina Othman, Nurul Na’imy Wan, Azliza Mohd Rusli, Wan Noor Syahirah Meor Idris
Abstract:
Iris detection is a process in which the position of the eyes is extracted from face images. It is a current method used for many applications, such as security and drowsiness detection. This paper proposes the use of eye detection for controlling the side mirrors of motor vehicles. The eye detection method aims to make it easy for the driver to adjust the side mirrors automatically. The system determines the midpoint coordinates of the detected eyes in the RGB (color) image, and the y-coordinate is sent as an input signal to a controller in order to rotate the side mirror of the vehicle. The eye region was cropped, and the midpoint coordinates were successfully detected from the iris circle using the Viola-Jones detector and circular Hough transform methods on the RGB image. The midpoint coordinates from the experiment were tested with the controller to determine the rotation angle of the side mirrors. Keywords: iris detection, midpoint coordinates, RGB images, side mirror
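A hedged OpenCV sketch of the detection chain described above (Haar-cascade eye detection followed by a circular Hough transform); parameter values and the mirror-angle mapping are illustrative assumptions.

```python
# A hedged sketch of the described chain: Viola-Jones (Haar cascade) eye detection
# followed by a circular Hough transform inside the eye region to find the iris
# midpoint.  Parameter values are illustrative assumptions.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def iris_midpoint(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in eyes:
        roi = gray[y:y + h, x:x + w]
        circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=w,
                                   param1=100, param2=20,
                                   minRadius=w // 10, maxRadius=w // 3)
        if circles is not None:
            cx, cy, _ = circles[0][0]
            return int(x + cx), int(y + cy)   # iris midpoint in image coordinates
    return None

# The y-coordinate of the midpoint would then be mapped to a mirror angle,
# e.g. angle = k * (y_mid - y_reference), and sent to the controller (assumed mapping).
```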
Procedia PDF Downloads 424
3573 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age
Authors: S. N. Susenburger, D. Boecker
Abstract:
In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting to orchestrating ecosystems, platform business models and service bundles. While large corporations are trying to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they are at the frontier of facing one of the largest organizational and cultural changes in history. This paper elaborates on how a manufacturing giant transformed its Corporate Information Technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and the approaches of the Innovation Inside-Out/Outside-In Strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and transformation into an IoT company. This paper also outlines the core elements of an innovation culture and how the persona 'Connected Engineer' thrives in the digital innovation environment. Further, it explores how tapping domain-focused ecosystems in vibrant innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations and surveys led to conclusions about the strategy tandem of Innovation Inside-Out/Outside-In. The findings indicate that it is crucial in which phases and at which maturity level the Innovation Inside-Out/Outside-In Strategy is activated: cultural aspects of the business and the regional ecosystem need to be considered, as well as cultural readiness from management and active contributors. The 'not invented here' syndrome is a barrier in large corporations that needs to be addressed and managed to successfully drive partnerships, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. This paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany. Keywords: innovation management, innovation culture, innovation methodologies, digital transformation
Procedia PDF Downloads 149
3572 A Model-Driven Approach of User Interface for MVP Rich Internet Application
Authors: Sarra Roubi, Mohammed Erramdani, Samir Mbarki
Abstract:
This paper presents an approach for the model-driven generation of Rich Internet Applications (RIAs), focusing on the graphical aspect. We used well-known Model-Driven Engineering (MDE) frameworks and technologies, such as the Eclipse Modeling Framework (EMF), the Graphical Modeling Framework (GMF), Query/View/Transformation (QVTo) and Acceleo, to enable the design and automatic code generation of the RIA. During the development of the approach, we focused on the graphical aspect of the application in terms of interfaces, opting for the Model View Presenter (MVP) pattern, which is designed for graphical interfaces. The paper describes the process followed to define the approach and the supporting tool, and presents the results from a case study. Keywords: metamodel, model-driven engineering, MVP, rich internet application, transformation, user interface
Procedia PDF Downloads 354
3571 Fracture Crack Monitoring Using Digital Image Correlation Technique
Authors: B. G. Patel, A. K. Desai, S. G. Shah
Abstract:
The main objective of this paper is to develop a new measurement technique that does not require touching the object. Digital Image Correlation (DIC) is an advanced measurement technique used to measure particle displacement with very high accuracy. This powerful, innovative technique correlates two image segments to determine the similarity between them. For this study, nine geometrically similar beam specimens of different sizes, with fibers (steel fibers and glass fibers) and without fibers, were tested under three-point bending in a closed-loop servo-controlled machine under crack mouth opening displacement control at an opening rate of 0.0005 mm/sec. Digital images were captured before loading (undeformed state) and at different instants of loading, and were analyzed using correlation techniques to compute the surface displacements, crack opening and sliding displacements, load-point displacement, crack length and crack tip location. It was seen that the crack mouth opening displacement (CMOD) and vertical load-point displacement computed using DIC analysis match well with those measured experimentally. Keywords: Digital Image Correlation, fibres, self compacting concrete, size effect
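A minimal sketch of the DIC principle under stated assumptions: a subset from the reference image is located in the deformed image by normalised cross-correlation, and the shift of the best match gives the local displacement. Sub-pixel refinement and strain computation are omitted.

```python
# A minimal DIC sketch (assumptions noted): locate a reference subset in the
# deformed image by normalised cross-correlation; the shift of the best match is
# the local displacement.  Images are assumed to be 8-bit grayscale arrays.
import cv2

def subset_displacement(ref_img, def_img, x, y, subset=31, search=20):
    """Displacement of the subset centred at (x, y) between reference and deformed images."""
    half = subset // 2
    template = ref_img[y - half:y + half + 1, x - half:x + half + 1]
    window = def_img[y - half - search:y + half + search + 1,
                     x - half - search:x + half + search + 1]
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    dx = max_loc[0] - search          # horizontal displacement in pixels
    dy = max_loc[1] - search          # vertical displacement in pixels
    return dx, dy
```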
Procedia PDF Downloads 390
3570 Clothes Identification Using Inception ResNet V2 and MobileNet V2
Authors: Subodh Chandra Shakya, Badal Shrestha, Suni Thapa, Ashutosh Chauhan, Saugat Adhikari
Abstract:
To tackle our problem of clothes identification, we used different architectures of convolutional neural networks. Among the different architectures, the outcomes from Inception ResNet V2 and MobileNet V2 seemed promising. On comparing the metrics, we observed that Inception ResNet V2 slightly outperforms MobileNet V2 for this purpose. This paper therefore proposes a clothes identifier using Inception ResNet V2 and also contains the comparison between the outcomes of Inception ResNet V2 and MobileNet V2. The document contains the results and findings of the research that we performed on the DeepFashion dataset. To improve the dataset, we used different image preprocessing techniques like image shearing, image rotation, and denoising. The whole experiment was conducted with the intention of testing the efficiency of convolutional neural networks on clothes identification so that we could develop a reliable system that is good enough at identifying the clothes worn by users. The whole system can be integrated with a recommendation system. Keywords: inception ResNet, convolutional neural net, deep learning, confusion matrix, data augmentation, data preprocessing
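A hedged Keras sketch of the transfer-learning setup described above; the head architecture, class count and augmentation parameters are illustrative assumptions, not the exact configuration used in the study.

```python
# A hedged sketch: an ImageNet-pretrained backbone (InceptionResNetV2 or MobileNetV2)
# with a new classification head, trained on augmented clothing images.  Layer sizes,
# class count and augmentation parameters are illustrative assumptions.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionResNetV2, MobileNetV2
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def build_classifier(backbone_name="inception_resnet_v2", num_classes=46):
    base_cls = InceptionResNetV2 if backbone_name == "inception_resnet_v2" else MobileNetV2
    base = base_cls(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False                              # freeze backbone; fine-tune later if desired
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Augmentation corresponding to the shearing/rotation preprocessing mentioned above.
augmenter = ImageDataGenerator(rescale=1.0 / 255, shear_range=0.2, rotation_range=20)
# train_gen = augmenter.flow_from_directory("deepfashion/", target_size=(224, 224))  # hypothetical path
# build_classifier().fit(train_gen, epochs=10)
```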
Procedia PDF Downloads 188
3569 Addressing the Exorbitant Cost of Labeling Medical Images with Active Learning
Authors: Saba Rahimi, Ozan Oktay, Javier Alvarez-Valle, Sujeeth Bharadwaj
Abstract:
Successful application of deep learning in medical image analysis necessitates unprecedented amounts of labeled training data. Unlike conventional 2D applications, radiological images can be three-dimensional (e.g., CT, MRI), consisting of many instances within each image. The problem is exacerbated when expert annotations are required for effective pixel-wise labeling, which incurs exorbitant labeling effort and cost. Active learning is an established research domain that aims to reduce labeling workload by prioritizing a subset of informative unlabeled examples to annotate. Our contribution is a cost-effective approach for U-Net 3D models that uses Monte Carlo sampling to analyze pixel-wise uncertainty. Experiments on the AAPM 2017 lung CT segmentation challenge dataset show that our proposed framework can achieve promising segmentation results by using only 42% of the training data. Keywords: image segmentation, active learning, convolutional neural network, 3D U-Net
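A minimal sketch, under stated assumptions, of the Monte Carlo uncertainty step: run the network with dropout active at inference time, estimate pixel-wise predictive variance over several stochastic passes, and rank unlabeled volumes by mean uncertainty for annotation.

```python
# A minimal sketch (assumptions, not the paper's exact framework) of Monte Carlo
# dropout uncertainty for selecting volumes to annotate.
import torch

def mc_uncertainty(model, volume, n_samples=10):
    """Pixel-wise predictive variance from Monte Carlo dropout passes."""
    model.train()                       # keep dropout layers active at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(volume), dim=1)
                             for _ in range(n_samples)])
    return probs.var(dim=0)             # variance per class/voxel

def rank_for_annotation(model, unlabeled_volumes, n_samples=10):
    """Return indices of volumes sorted from most to least uncertain."""
    scores = [mc_uncertainty(model, v, n_samples).mean().item() for v in unlabeled_volumes]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```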
Procedia PDF Downloads 156
3568 A Performance Analysis Study for Cloud Based ERP Systems
Authors: Burak Erkayman
Abstract:
Manufacturing and service organizations need ERP systems to integrate many functions, from purchasing to storage and from production planning to cost calculation. Using ERP systems for integration at the information level provides companies with remarkable advantages in terms of profitability, productivity and process efficiency. Cloud computing is one of the most significant changes in information and communication technology. The developments in cloud computing attract the business world to take advantage of this field. Cloud computing means much more storage space, more cost savings and faster data transfer rates. In addition, it presents new business models, new fields of study and practicable solutions for anyone's use. These developments make the migration of ERP systems to the cloud environment inevitable. In this study, the performance of ERP systems in the cloud environment is analyzed through various performance criteria, and a comparison between traditional and cloud ERP systems is presented. At the end of the study, the transformation and the future of ERP systems are discussed. Keywords: cloud-ERP, ERP system performance, information system transformation
Procedia PDF Downloads 530
3567 Looking beyond Lynch's Image of a City
Authors: Sandhya Rao
Abstract:
Kevin Lynch's theory of imageability lets one explore a city in terms of five elements: nodes, paths, edges, landmarks and districts. What happens when we try to record the same data in an Indian context? What happens when we apply the same theory of imageability to the complex, shifting urban patterns of Indian cities, and how can we as urban designers demonstrate our role in the image-building ordeal of these cities? The organizational patterns formed through the mental images of an Indian city are often diverse and intangible. They are also multi-layered and temporary in terms of the spirit of the place. The pattern of images formed is loaded with associative meaning and intrinsically linked with the history and socio-cultural dominance of the place. The embedded memory of a place in one's mind often plays an even more important role while formulating these images. Thus, while deriving an image of a city, one is often confused or finds the result chaotic. Due to this complexity, the images formed are also difficult to represent using a single medium. Under such a scenario, it is difficult both to derive an output from the constructed image and to make design interventions to enhance the legibility of a place. However, there can be a combination of tools and methods that allows one to record the key elements of a place through time, space and one's interface with the place. There also has to be a clear understanding of the participant groups of a place and their time and period of engagement with the place. How the result obtained can be translated into a design intervention is the main aim of the research. Could a multi-faceted cognitive mapping be an answer to this, or could it be a very transient mapping method which can change over time, place and person? How does the context influence the process of image building in one's mind? These are the key questions that this research aims to answer. Keywords: imageability, organizational patterns, legibility, cognitive mapping
Procedia PDF Downloads 314
3566 Detection and Classification of Mammogram Images Using Principle Component Analysis and Lazy Classifiers
Authors: Rajkumar Kolangarakandy
Abstract:
Feature extraction and selection are the primary parts of any mammogram classification algorithm. The choice of features, attributes or measurements has an important influence on any classification system. Discrete Wavelet Transformation (DWT) coefficients are among the prominent features for representing images in the frequency domain. The features obtained after the decomposition of the mammogram images using wavelet transformations have a high dimension. Even though the features are high in dimension, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transformation coefficients and lazy classifiers is proposed. The classification is accomplished at two different levels. At the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. At the second level, all the abnormal mammogram ROIs are classified into benign and malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers KStar, IBL and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the results are also compared with the performance obtained without dimensionality reduction. Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL
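A hedged sketch of the feature pipeline described above (2-D wavelet decomposition, PCA, and an instance-based classifier); the wavelet family, decomposition level and the KNN stand-in for KStar/IBL/LWL are assumptions.

```python
# A hedged sketch of the described pipeline: DWT features -> PCA -> lazy
# (instance-based) classification.  KNN stands in for KStar/IBL/LWL here.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def dwt_features(roi, wavelet="db4", level=2):
    """Flatten all sub-band coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(roi, wavelet=wavelet, level=level)
    parts = [coeffs[0].ravel()]
    for (cH, cV, cD) in coeffs[1:]:
        parts.extend([cH.ravel(), cV.ravel(), cD.ravel()])
    return np.concatenate(parts)

def build_classifier(n_components=50):
    # The first level (normal vs abnormal) and the second level (benign vs malignant)
    # would each use a pipeline like this, trained on the respective labels.
    return make_pipeline(PCA(n_components=n_components),
                         KNeighborsClassifier(n_neighbors=5))
```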
Procedia PDF Downloads 335
3565 Transformation of the Postindustrial City - The Conversion of a Smelter into a Restaurant with Panoramic Views
Authors: Martina Perinkova, Lenka Kolarcikova, Marketa Twrda
Abstract:
In Ostrava there are many former post-industrial areas and sites that have gradually been brought back to life through conversions and subsequent reuse. One of the largest is the national cultural monument of the Lower Vítkovice area, where a large, complex transformation of the former ironworks is taking place. This industrial heritage is today visited by tourists for entertainment, culture, history, sports and other activities. It is a unique example of reusing technical monuments and introducing new life into a historic area. The main task is not only finding the right function and use, in terms of reintegration into city life and finding a balance between history and the current lifestyle, but also examining the history of the area and its technical condition before reconstruction. This is not only very expensive but also time consuming. The transformation of an industrial monument is the result of a dialogue between the architect, the ideas of the investor and the expert opinion of the heritage institute. Keywords: post-industrial area, cultural monument, conversions
Procedia PDF Downloads 341
3564 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing
Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah
Abstract:
The aim of this paper is to present a distributed implementation of the Type-2 Fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is implemented on an SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents following the AVPE (Agent Virtual Processing Element) model, in order to improve the processing resources needed for performing big data image segmentation. In this work, we focused on applying this algorithm to process a big data MRI (Magnetic Resonance Imaging) image of size (n x m). The image is encapsulated in the mobile agent team leader in order to be split into (m x n) pixels, one per AVPE. Each AVPE performs and exchanges the segmentation results and maintains asynchronous communication with its team leader until the algorithm converges. Interesting experimental results are obtained in terms of accuracy and efficiency of the proposed implementation, thanks to the several useful capabilities of mobile agents introduced in this distributed computational model. Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing
Procedia PDF Downloads 429
3563 Potassium-Phosphorus-Nitrogen Detection and Spectral Segmentation Analysis Using Polarized Hyperspectral Imagery and Machine Learning
Authors: Nicholas V. Scott, Jack McCarthy
Abstract:
Military, law enforcement, and counter-terrorism organizations are often tasked with target detection and image characterization of scenes containing explosive materials in various types of environments where light scattering intensity is high. Mitigation of this photonic noise using classical digital filtration and signal processing can be difficult. This is partially due to the lack of robust image processing methods for photonic noise removal, which strongly influences high-resolution target detection and machine learning-based pattern recognition. Such analysis is crucial to the delivery of reliable intelligence. Polarization filters are a possible method for ambient glare reduction: by allowing only certain modes of the electromagnetic field to be captured, they provide strong scene contrast. An experiment was carried out utilizing a polarization lens attached to a hyperspectral imagery camera for the purpose of exploring the degree to which an imaged polarized scene of a potassium, phosphorus, and nitrogen mixture allows for improved target detection and image segmentation. Preliminary imagery results based on the application of machine learning algorithms, including competitive leaky learning and distance metric analysis, to polarized hyperspectral imagery suggest that polarization filters provide a slight advantage in image segmentation. The results of this work have implications for understanding the presence of explosive material in dry, desert areas where reflective glare is a significant impediment to scene characterization. Keywords: explosive material, hyperspectral imagery, image segmentation, machine learning, polarization
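A minimal sketch of the spectral segmentation step under stated assumptions: each pixel's spectrum in the hyperspectral cube is treated as a feature vector and clustered; plain k-means is used here as a stand-in for the competitive leaky learning and distance-metric analysis of the study.

```python
# A minimal sketch: cluster per-pixel spectra of a hyperspectral cube to obtain a
# segmentation label map.  Plain k-means is a stand-in for the study's algorithms.
from sklearn.cluster import KMeans

def segment_hyperspectral(cube, n_clusters=4):
    """cube: (rows, cols, bands) array; returns a (rows, cols) label map."""
    rows, cols, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(float)     # one spectrum per pixel
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(spectra)
    return labels.reshape(rows, cols)
```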
Procedia PDF Downloads 142
3562 Evaluation of Robust Feature Descriptors for Texture Classification
Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo
Abstract:
Texture is an important characteristic in real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques in a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition is still a difficult task due to the complex aspects of texture. Recently, many robust and scale-invariant image features such as SIFT, SURF and ORB have been successfully used in image retrieval and object recognition. In this paper, we have compared the performance of these feature descriptors, combined with k-means clustering, for texture classification. Different classifiers, including K-NN, Naive Bayes, Back Propagation Neural Network, Decision Tree and KStar, were applied to three texture image sets: UIUCTex, KTH-TIPS and Brodatz. Experimental results reveal that SIFT achieves the best average accuracy rate on UIUCTex and KTH-TIPS, while SURF has the advantage on the Brodatz texture set. The BP neural network works best in test set classification among all the classifiers used. Keywords: texture classification, texture descriptor, SIFT, SURF, ORB
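A hedged sketch of the bag-of-visual-words pipeline implied above, using ORB (SIFT or SURF would be swapped in the same way); the vocabulary size and the KNN classifier are illustrative assumptions.

```python
# A hedged sketch: local descriptors are clustered with k-means into a vocabulary,
# each image becomes a histogram of visual words, and a classifier is trained on
# the histograms.  Vocabulary size and the KNN stand-in are assumptions.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

orb = cv2.ORB_create(nfeatures=500)

def orb_descriptors(gray):
    _, desc = orb.detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 32), dtype=np.uint8)

def build_vocabulary(images, k=200):
    all_desc = np.vstack([orb_descriptors(img) for img in images]).astype(float)
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)

def bow_histogram(gray, vocabulary):
    words = vocabulary.predict(orb_descriptors(gray).astype(float))
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# vocab = build_vocabulary(train_images)
# X = np.array([bow_histogram(img, vocab) for img in train_images])
# clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
```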
Procedia PDF Downloads 371
3561 Evaluation of Longitudinal Relaxation Time (T1) of Bone Marrow in Lumbar Vertebrae of Leukaemia Patients Undergoing Magnetic Resonance Imaging
Authors: M. G. R. S. Perera, B. S. Weerakoon, L. P. G. Sherminie, M. L. Jayatilake, R. D. Jayasinghe, W. Huang
Abstract:
The aim of this study was to measure and evaluate the longitudinal relaxation times (T1) in the bone marrow of an Acute Myeloid Leukaemia (AML) patient in order to explore their potential as a prognostic biomarker using Magnetic Resonance Imaging (MRI), which would provide a non-invasive prognostic approach to AML. MR image data were collected in the DICOM format, and MATLAB Simulink software was used for the image processing and data analysis. For quantitative MRI data analysis, Regions of Interest (ROIs) were drawn on multiple image slices encompassing the vertebral bodies of L3, L4, and L5. T1 was evaluated using the T1 maps obtained. The estimated mean bone marrow T1 value was 790.1 ms at 3T. However, the reported T1 value for healthy subjects (946.0 ms) is significantly higher than the present finding. This suggests that the T1 of bone marrow can be considered a potential prognostic biomarker for AML patients. Keywords: acute myeloid leukaemia, longitudinal relaxation time, magnetic resonance imaging, prognostic biomarker
Procedia PDF Downloads 531
3560 Unequal Error Protection of VQ Image Transmission System
Authors: Khelifi Mustapha, A. Moulay lakhdar, I. Elawady
Abstract:
We study unequal error protection for VQ image transmission. We have used Reed-Solomon (RS) codes for channel coding because they offer better performance in terms of channel error correction over a binary output channel. Such a channel (binary input and output) should be considered in the case of the application layer, because it includes all the features of the layers located below it, in which it is usually not feasible to make changes. Keywords: vector quantization, channel error correction, Reed-Solomon channel coding, application
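A minimal sketch of unequal error protection with Reed-Solomon codes, assuming the third-party reedsolo package; the bit split and parity amounts are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch (assuming the reedsolo package): the most significant part of
# each VQ index, which has a stronger impact on reconstructed image quality, is
# encoded with more Reed-Solomon parity symbols than the less significant part.
from reedsolo import RSCodec

strong_rs = RSCodec(32)   # more parity symbols for perceptually important bits
weak_rs = RSCodec(8)      # fewer parity symbols for less important bits

def protect(vq_indices):
    """vq_indices: bytes of vector-quantizer indices; returns two protected streams."""
    msb = bytes(b >> 4 for b in vq_indices)       # high nibbles: coarse codebook part
    lsb = bytes(b & 0x0F for b in vq_indices)     # low nibbles: fine codebook part
    return strong_rs.encode(msb), weak_rs.encode(lsb)

def recover(strong_stream, weak_stream):
    # reedsolo >= 1.0: decode() returns (message, message+ecc, errata positions).
    msb = strong_rs.decode(strong_stream)[0]
    lsb = weak_rs.decode(weak_stream)[0]
    return bytes((m << 4) | l for m, l in zip(msb, lsb))
```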
Procedia PDF Downloads 365