Search results for: image dictionary creation
4113 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms
Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier
Abstract:
Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is a smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to the shoulder-surfing effect. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and reenter each password five times. In replication mode, users saw a password image created by another user for a fixed duration of time. Three different duration timers, namely 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether a person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare their performance in user authentication: namely, Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with Gaussian Radial Basis Kernel function, and K-Nearest Neighbor. Because gesture-based password features vary from one entry to the next, it is difficult to distinguish between a creator and an intruder for authentication.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using the five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using the five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with the Gaussian Radial Basis Kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability
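The per-feature normalization step described above can be sketched as min-max scaling applied column by column; the (password score, length, speed, size) rows below are invented for illustration and are not the study's data.

```python
# Hypothetical sketch of per-feature min-max normalization; the feature rows
# (score, length, speed, size) are invented for illustration.

def min_max_normalize(rows):
    """Scale every feature column of `rows` into the range [0, 1]."""
    cols = list(zip(*rows))
    scaled = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0  # guard against constant features
        scaled.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled)]

# Each row: (password score, password length, password speed, password size)
entries = [(0.8, 12, 3.1, 140.0), (0.5, 7, 4.6, 90.0), (0.9, 15, 2.8, 210.0)]
normalized = min_max_normalize(entries)
```

Scaling each feature to [0, 1] keeps a large-ranged feature such as password size from dominating distance-based classifiers like K-Nearest Neighbor.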
Procedia PDF Downloads 106
4112 Methodological Proposal, Archival Thesaurus in Colombian Sign Language
Authors: Pedro A. Medina-Rios, Marly Yolie Quintana-Daza
Abstract:
Having the opportunity to communicate in social, academic, and work contexts is very relevant for any individual, and even more so for a deaf person, for whom oral language is not their natural language and written language is their second language. Currently, to the best of our knowledge, there is no specialized dictionary in Colombia for sign language archiving. Archival work is one of the areas in which the deaf community has the greatest chance of performing. Adding new signs to dictionaries for deaf people extends the possibility that they have the appropriate signs to communicate and improves their performance. The aim of this work was to illustrate the importance of designing pedagogical and technological strategies of knowledge management for the academic inclusion of deaf people, through proposals of lexicon in Colombian Sign Language (LSC) in the area of archiving. As a method, an analytical study was used to identify relevant words in the technical area of archiving and their counterparts in LSC; 30 deaf people, apprentices and students of the Servicio Nacional de Aprendizaje (SENA) in Documentary or Archival Management programs, were evaluated through direct interviews in LSC. For the analysis, tools were used to evaluate correlation patterns, together with linguistic methods of visual, gestural, and corpus analysis; in addition, linear regression methods were used. Among the results, significant data were found for the variables socioeconomic stratum, academic level, and labor location, as well as the need to generate new signs on the subject of archiving to improve communication between the deaf person, the hearing person, and the sign language interpreter. It is concluded that the generation of new signs to nourish the LSC dictionary in archival subjects is necessary to improve the labor inclusion of deaf people in Colombia.
Keywords: archival, inclusion, deaf, thesaurus
Procedia PDF Downloads 278
4111 A Novel Approach of Secret Communication Using Douglas-Peucker Algorithm
Authors: R. Kiruthika, A. Kannan
Abstract:
Steganography is the problem of hiding secret messages in 'innocent-looking' public communication so that the presence of the secret message cannot be detected. This paper introduces steganographic security in terms of computational indistinguishability from a channel of probability distributions on cover messages. The method first splits the cover image into two separate blocks using the Douglas-Peucker algorithm. The text message and the image are then hidden in the Least Significant Bit (LSB) of the cover image.
Keywords: steganography, LSB, embedding, Douglas-Peucker algorithm
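The LSB embedding step can be sketched on a flat list of 8-bit pixel values as below; this is a generic LSB sketch and omits the paper's Douglas-Peucker block split, and the pixel and message values are illustrative.

```python
# Generic LSB embedding sketch (illustrative; the paper additionally splits the
# cover image into two blocks with the Douglas-Peucker algorithm first).

def embed_lsb(pixels, bits):
    """Replace the least significant bit of the first len(bits) pixels."""
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out of the first n_bits pixels."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [120, 53, 200, 77, 90, 14, 250, 33]
message = [1, 0, 1, 1]
stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))  # [1, 0, 1, 1]
```

Because only the lowest bit of each carrier pixel changes, no pixel value moves by more than 1, which is what makes the embedding visually imperceptible.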
Procedia PDF Downloads 363
4110 Sampling Two-Channel Nonseparable Wavelets and Its Applications in Multispectral Image Fusion
Authors: Bin Liu, Weijie Liu, Bin Sun, Yihui Luo
Abstract:
In order to solve the problems of lower spatial resolution and block effects in fusion methods based on the separable wavelet transform, a new sampling mode based on multi-resolution analysis of the two-channel nonseparable wavelet transform, whose dilation matrix is [1,1;1,-1], is presented, and a multispectral image fusion method based on this sampling mode is proposed. Filter banks related to this kind of wavelet are constructed, and multiresolution decompositions of the intensity of the MS image and the panchromatic image are performed in the sampled mode using the constructed filter bank. The low- and high-frequency coefficients are fused by different fusion rules. The experimental results show that this method has good visual effect. The fusion performance has been noted to outperform the IHS fusion method, as well as the fusion methods based on DWT, IHS-DWT, IHS-Contourlet transform, and IHS-Curvelet transform, in preserving both spectral quality and high spatial resolution information. Furthermore, when compared with the fusion method based on the nonsubsampled two-channel nonseparable wavelet, the proposed method has been observed to have higher spatial resolution and good global spectral information.
Keywords: image fusion, two-channel sampled nonseparable wavelets, multispectral image, panchromatic image
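The idea of fusing low- and high-frequency coefficients with different rules can be sketched as follows; the two rules shown (averaging for low frequencies, larger magnitude wins for high frequencies) are common illustrative choices and are not necessarily the paper's exact rules.

```python
# Illustrative fusion rules for decomposed wavelet coefficients (assumed, not
# the paper's exact rules): average low-frequency coefficients to preserve
# spectral content; keep the larger-magnitude high-frequency coefficient to
# preserve spatial detail.

def fuse_low(a, b):
    return [(x + y) / 2 for x, y in zip(a, b)]

def fuse_high(a, b):
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]

ms_low, pan_low = [2.0, 4.0], [4.0, 8.0]
ms_high, pan_high = [1.0, -5.0], [3.0, 2.0]
fused_low = fuse_low(ms_low, pan_low)      # [3.0, 6.0]
fused_high = fuse_high(ms_high, pan_high)  # [3.0, -5.0]
```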
Procedia PDF Downloads 440
4109 An Approach for Reducing Morphological Operator Dataset and Recognize Optical Character Based on Significant Features
Authors: Ashis Pradhan, Mohan P. Pradhan
Abstract:
Pattern matching is useful for recognizing characters in a digital image. OCR is one such technique, which reads characters from a digital image and recognizes them. Line segmentation is initially used for identifying characters in an image and is later refined by morphological operations like binarization, erosion, thinning, etc. This work discusses a recognition technique that defines a set of morphological operators based on their orientation in a character. These operators are further categorized into groups having similar shape but different orientation for efficient utilization of memory. Finally, the characters are recognized in accordance with the frequency of occurrence in the hierarchy of significant patterns of those morphological operators and by comparing them with the existing database of each character.
Keywords: binary image, morphological patterns, frequency count, priority, reduction data set and recognition
Procedia PDF Downloads 413
4108 Digital Value Co-Creation: The Case of Worthy a Virtual Collaborative Museum across Europe
Authors: Camilla Marini, Deborah Agostino
Abstract:
Cultural institutions provide more than service-based offers; indeed, they are experience-based contexts. A cultural experience is a special event that encompasses a wide range of values which, for visitors, are primarily cultural rather than economic and financial. Cultural institutions have always been characterized by inclusivity and participatory practices, but the advent of digital technologies has heightened their interest in collaborative practices and in the relationship with their audience. Indeed, digital technologies have deeply affected the cultural experience as it was conceived. Museums in particular, as traditional and authoritative cultural institutions, have been strongly challenged by digital technologies. They shifted from a collection-oriented toward a visitor-centered approach, and digital technologies generated a highly interactive ecosystem in which visitors have an active role, shaping their own cultural experience. Most studies that investigate value co-creation in museums adopt a single perspective, either that of the museum or that of the users, while the convergence/divergence of these perspectives remains underexplored. Additionally, many contributions focus on digital value co-creation as an outcome rather than as a process. This study aims to provide a joint perspective on digital value co-creation that includes both the museum and its visitors. It also deepens the contribution of digital technologies to the value co-creation process, addressing the following research questions: (i) what are the convergence/divergence drivers of digital value co-creation, and (ii) how can digital technologies be means of value co-creation? The study adopts an action research methodology based on the case of WORTHY, an educational project which involves cultural institutions and schools all around Europe, creating a virtual collaborative museum.
It represents a valuable case for the aim of the study since it has digital technologies at its core, and interaction through digital technologies is fundamental throughout the experience. Action research has been identified as the most appropriate methodology for researchers to have direct contact with the field. Data have been collected through primary and secondary sources. Cultural mediators such as museums, teachers, and students’ families have been interviewed, while a focus group was designed to interact with students, investigating all aspects of the cultural experience. Secondary sources encompassed project reports and website contents in order to deepen the perspective of the cultural institutions. Preliminary findings highlight the dimensions of digital value co-creation in cultural institutions from an integrated museum-visitor perspective and the contribution of digital technologies to the value co-creation process. The study outlines a two-fold contribution that encompasses both an academic and a practitioner level. Indeed, it contributes to filling the gap in the cultural management literature about the convergence/divergence of service provider-user perspectives, but it also provides cultural professionals with guidelines on how to evaluate the digital value co-creation process.
Keywords: co-creation, digital technologies, museum, value
Procedia PDF Downloads 147
4107 Role of Radiologic Technologist Specialist in Plain Image Interpretation of Adults in the Middle East: A Radiologist’s Perspective
Authors: Awad Mohamed Elkhadir, Rajab M. Ben Yousef
Abstract:
Background/Aim: Radiological technologists are medical professionals who perform diagnostic imaging tests such as X-rays, magnetic resonance imaging (MRI) scans, and computed tomography (CT) scans. Despite the recognition of image interpretation by British radiologists, it is still considered a problem in the Arab world. This study evaluates the perceptions of radiologists in the Middle East concerning the plain image interpretation of adults by radiologic technologist specialists. Methods: This is a cross-sectional study that follows a quantitative approach. A close-ended questionnaire was distributed among 103 participants who were radiologists by profession from various hospitals in Saudi Arabia and Sudan. The gathered data were then analyzed with the Statistical Package for the Social Sciences (SPSS). Results: The results showed that 29% recognized the radiologic technologist specialist (RTS) role of writing image reports, while 61% did not. A total of 38% of participants believed that RTS image interpretation would help diagnose unreported radiographs. 47% of the sample responded that the workload and stress on radiologists would be reduced by allowing RTS to report, while 37% did not. Lastly, 43% believed that image interpretation by RTS could be introduced into the Middle East in the future. Conclusion: The study's findings reveal that combining image reporting with radiography improves patient care. The outcomes also show that the burden on medical practitioners is reduced when radiographers take on image reporting. Further research needs to be conducted in the Arab world to identify and measure the factors associated with the desired criteria.
Keywords: Arab world, image interpretation, radiographer, radiologist, Saudi Arabia, Sudan
Procedia PDF Downloads 100
4106 Image Steganography Using Predictive Coding for Secure Transmission
Authors: Baljit Singh Khehra, Jagreeti Kaur
Abstract:
In this paper, a steganographic strategy is used to hide a text file inside an image. To increase the storage limit, predictive coding is utilized to embed the information. In the proposed scheme, one can exchange secure information by means of a predictive coding methodology. The predictive coding produces a high-quality stego-image, whose pixels are utilized to embed the secret information. The proposed data-hiding scheme is robust compared with existing methodologies. By applying this strategy, users can efficiently conceal information. Entropy, standard deviation, mean square error, and peak signal-to-noise ratio are the parameters used to evaluate the proposed methodology. The results of the proposed approach are quite promising.
Keywords: cryptography, steganography, reversible image, predictive coding
Procedia PDF Downloads 417
4105 Cross-Cultural Collaboration Shaping Co-Creation Methodology to Enhance Disaster Risk Management Approaches
Authors: Jeannette Anniés, Panagiotis Michalis, Chrysoula Papathanasiou, Selby Knudsen
Abstract:
The RiskPACC project aims to bring together researchers, practitioners, and first responders from nine European countries, following a co-creation approach to develop customised solutions that meet the needs of end users. The co-creation workshops aim to enhance the communication pathways between local civil protection authorities (CPAs) and citizens, in an effort to close the risk perception-action gap (RPAG). The participants in the workshops include a variety of stakeholders as well as citizens, fostering dialogue between the groups and supporting citizen participation in disaster risk management (DRM). The co-creation methodology in place implements co-design elements through the integration of four ICT tools. These ICT tools include web-based and mobile application technical solutions at different development stages, ranging from formulation and validation of concepts to pilot demonstrations. In total, seven case studies are foreseen in RiskPACC. The workflow of the workshops is designed to be adaptive to each of the seven case study countries and the particular needs of their cultures. This work aims to provide an overview of the preparation and conduct of the workshops, in which researchers and practitioners focused on mapping these different needs of the end users. The latter included first responders but also volunteers and citizens who actively participated in the co-creation workshops. The strategies to improve communication between CPAs and the citizens themselves differ between the countries, and the modules of the co-creation methodology are adapted in response to such differences. Moreover, the project partners experienced how the structure of such workshops is perceived differently across the seven case studies. Therefore, the co-creation methodology itself is a design method undergoing several iterations, which are eventually shaped by cross-cultural collaboration.
For example, some case studies applied other modules according to the participatory group recruited. The participants were technical experts, teachers, citizens, first responders, or volunteers, among others. This work aspires to present the divergent approaches of the seven case studies implementing the proposed co-creation methodology, in response to different perceptions of the modules. An analysis of the adaptations and implications will also be provided to assess where the case studies' objective of improving disaster resilience has been achieved.
Keywords: citizen participation, co-creation, disaster resilience, risk perception, ICT tools
Procedia PDF Downloads 88
4104 Factors Influencing the Development and Implementation of Radiology Technologist Specialist Role in Image Interpretation in Sudan
Authors: Awad Elkhadir, Rajab M. Ben Yousef
Abstract:
Introduction: The production of high-quality medical images by radiology technologists is useful in diagnosing and treating various injuries and diseases. However, the factors affecting the role of radiology technologists in image interpretation in Sudan have not been investigated widely. Methods: A cross-sectional study was employed, recruiting ten radiology college deans in Sudan. The questionnaire was distributed online, and the obtained data were analyzed using Microsoft Excel and IBM SPSS version 16.0 to generate descriptive statistics. Results: The results showed that half of the deans were doubtful about the readiness of Sudan to implement the role of the radiology technologist specialist in image interpretation. The majority of them (60%) believed that this issue had been most strongly pushed by researchers over the past decade. The factors affecting the implementation of the radiology technologist specialist role in image interpretation included: education/training (100%), recognition (30%), technical issues (30%), people-related issues (20%), management changes (30%), government role (30%), costs (10%), and timings (20%). Conclusion: The study concluded that there is a need for a change in image interpretation by radiology technologists in Sudan.
Keywords: development, image interpretation, implementation, radiology technologist specialist, Sudan
Procedia PDF Downloads 88
4103 The Quantum Theory of Music and Human Languages
Authors: Mballa Abanda Luc Aurelien Serge, Henda Gnakate Biba, Kuate Guemo Romaric, Akono Rufine Nicole, Zabotom Yaya Fadel Biba, Petfiang Sidonie, Bella Suzane Jenifer
Abstract:
The main hypotheses proposed around the definitions of the syllable and of music, and of the common origin of music and language, should lead the reader to reflect on the cross-cutting questions raised by the debate on the notion of universals in linguistics and musicology. These are objects of controversy, and therein lies their interest: the debate raises questions that are at the heart of theories on language. This is an inventive, original, and innovative research thesis. It contributes to the theoretical, musicological, ethnomusicological, and linguistic conceptualization of languages, giving rise to the practice of interlocution between the social and cognitive sciences, the activities of artistic creation, and the question of modeling in the human sciences: mathematics, computer science, translation automation, and artificial intelligence. When you apply this theory to any text of a folk song in a tone language, you not only piece together the exact melody, rhythm, and harmonies of that song as if you knew it in advance, but also the exact speech of this language. The author believes that the issue of the disappearance of tonal languages and their preservation has been structurally resolved, as has one of the greatest cultural equations related to the composition and creation of tonal, polytonal, and random music. With the experimentation confirming the theorization, I designed a semi-digital, semi-analog application that translates the tonal languages of Africa (about 2,100 languages) into blues, jazz, world music, polyphonic music, tonal and atonal music, and deterministic and random music. To test this application, I use music reading and writing software that allows me to collect the data extracted from my mother tongue, which is already modeled in the musical staves saved in the ethnographic (semiotic) dictionary for automatic translation (volume 2 of the book).
The translation is done from writing to writing, from writing to speech, and from writing to music. Mode of operation: you type a text on your computer, a structured song (chorus-verse), and you ask the machine for a melody in blues, jazz, world music, variety, etc. The software runs, giving you the option to choose harmonies, and then you select your melody.
Keywords: language, music, sciences, quantum entanglement
Procedia PDF Downloads 77
4102 Small Businesses as Vehicles for Job Creation in North-West Nigeria
Authors: Mustapha Shitu Suleiman, Francis Neshamba, Nestor Valero-Silva
Abstract:
Small businesses are considered engines of economic growth, contributing to employment generation, wealth creation, poverty alleviation, and food security in both developed and developing countries. Nigeria is facing many socio-economic problems, and it is believed that by supporting the development of small businesses, which act as propellers of new ideas and more effective users of resources, often driven by individual creativity and innovation, Nigeria would be able to address some of its economic and social challenges, such as unemployment and economic diversification. Using secondary literature, this paper examines the role small businesses can play in the creation of jobs in North-West Nigeria to overcome the issue of unemployment, which is the most devastating economic challenge facing the region. Most studies in this area have focused on Nigeria as a whole, and only a few provide a regional focus; hence, this study contributes to knowledge by filling this gap through its concentration on North-West Nigeria. It is hoped that, with the present administration's determination to improve the economy, small businesses will be used as vehicles for diversification of the economy away from crude oil, creating jobs that would lead to a reduction in the country's high unemployment level.
Keywords: job creation, north-west, Nigeria, small business, unemployment
Procedia PDF Downloads 307
4101 Medical Image Watermark and Tamper Detection Using Constant Correlation Spread Spectrum Watermarking
Authors: Peter U. Eze, P. Udaya, Robin J. Evans
Abstract:
Data hiding can be achieved by steganography or invisible digital watermarking. For digital watermarking, both accurate retrieval of the embedded watermark and the integrity of the cover image are important. Medical image security in teleradiology is one of the applications where the embedded patient record needs to be extracted with accuracy and the integrity of the medical image verified. In this research paper, Constant Correlation Spread Spectrum digital watermarking for medical image tamper detection and accurate embedded watermark retrieval is introduced. In the proposed method, a watermark bit from a patient record is spread in a medical image sub-block such that the correlation of every watermarked sub-block with a spreading code, W, has a constant value, p. The constant correlation p, the spreading code W, and the size of the sub-blocks constitute the secret key. Tamper detection is achieved by flagging any sub-block whose correlation value deviates from p by more than a small value, ε. The major features of our new scheme include: (1) improving watermark detection accuracy for high-pixel-depth medical images by reducing the Bit Error Rate (BER) to zero, and (2) block-level tamper detection in a single computational process with simultaneous watermark detection, thereby increasing utility at the same computational cost.
Keywords: constant correlation, medical image, spread spectrum, tamper detection, watermarking
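A minimal sketch of the constant-correlation idea, assuming a ±1 spreading code and illustrative values for the sub-block, p, and ε: embedding shifts a sub-block along W so that its correlation with W becomes exactly p, and detection flags any sub-block whose correlation drifts away from p.

```python
# Sketch of the constant-correlation scheme described above, assuming a +/-1
# spreading code W; the sub-block values, p, and eps are illustrative.

def correlation(block, w):
    return sum(x * s for x, s in zip(block, w)) / len(block)

def embed_constant_corr(block, w, p):
    """Shift the sub-block along W so its correlation with W becomes exactly p."""
    c = correlation(block, w)
    return [x + (p - c) * s for x, s in zip(block, w)]  # works because s*s == 1

def is_tampered(block, w, p, eps=1e-6):
    """Flag the sub-block if its correlation deviates from p by more than eps."""
    return abs(correlation(block, w) - p) > eps

w = [1, -1, 1, 1, -1, 1, -1, -1]
block = [34.0, 36.0, 33.0, 35.0, 37.0, 34.0, 36.0, 35.0]
marked = embed_constant_corr(block, w, p=2.0)
```

Any later change to a single pixel of `marked` perturbs its correlation with W, so that sub-block is flagged without needing the original image.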
Procedia PDF Downloads 194
4100 Design and Implementation of Partial Denoising Boundary Image Matching Using Indexing Techniques
Authors: Bum-Soo Kim, Jin-Uk Kim
Abstract:
In this paper, we design and implement a partial denoising boundary image matching system using indexing techniques. Converting boundary images to time-series makes it feasible to perform fast search using indexes, even on a very large image database. Using this conversion method, we develop a client-server system, based on previous partial denoising research, in a GUI (graphical user interface) environment. The client first converts a query image given by a user to a time-series and sends the denoising parameters and the tolerance along with this time-series to the server. The server identifies similar images from the index by evaluating a range query, which is constructed using the inputs given by the client, and sends the resulting images back to the client. Experimental results show that our system provides highly intuitive and accurate matching results.
Keywords: boundary image matching, indexing, partial denoising, time-series matching
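The boundary-to-time-series conversion can be sketched with one common mapping, centroid-to-boundary distance (an assumption here; this abstract does not state the exact mapping used), followed by a Euclidean range query over the stored series.

```python
# One common boundary-to-time-series mapping (an assumption here): the distance
# from the shape centroid to each sampled boundary point, searched with a
# Euclidean range query over a toy in-memory "index".
import math

def boundary_to_series(points):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return [math.hypot(x - cx, y - cy) for x, y in points]

def range_query(query, series_db, tolerance):
    """Return indexes of stored series within `tolerance` of the query series."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [i for i, s in enumerate(series_db) if dist(query, s) <= tolerance]

square = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
series = boundary_to_series(square)  # every sample is 1.0 for this centred square
```

A real deployment would evaluate the range query against a multidimensional index rather than a linear scan, which is exactly what makes the conversion to time-series worthwhile.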
Procedia PDF Downloads 137
4099 Deepnic, A Method to Transform Each Variable into Image for Deep Learning
Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.
Abstract:
Deep learning based on convolutional neural networks (CNN) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image where each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct a single image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip.
Keywords: tabular data, deep learning, perfect trees, NICs
Procedia PDF Downloads 89
4098 Monocular Visual Odometry for Three Different View Angles by Intel Realsense T265 with the Measurement of Remote
Authors: Heru Syah Putra, Aji Tri Pamungkas Nurcahyo, Chuang-Jan Chang
Abstract:
The MOIL-SDK method refers to the spatial angle that forms a view with a different perspective from the fisheye image. Visual odometry is a trusted approach for extending projects by tracking using image sequences. We present a real-time, precise, and persistent approach that contributes to this work by taking datasets and generating ground truth as a reference for the estimates of each image, using the FAST algorithm to find keypoints that are evaluated during the tracking process with the 5-point algorithm and RANSAC, and producing accurate estimates of the camera trajectory for each rotational and translational movement on the X, Y, and Z axes.
Keywords: MOIL-SDK, intel realsense T265, fisheye image, monocular visual odometry
Procedia PDF Downloads 134
4097 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Authors: Omaima N. Ahmad AL-Allaf
Abstract:
Over communication networks, images can be easily copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to problems of authority. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. In this paper, we discuss two approaches for image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used with the two approaches separately in the embedding process for the cover image transformation. Both PSO and GA use the correlation coefficient to detect the high-energy coefficient watermark bit in the original image and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations, and 3×3 blocks. According to the results, we can note that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size can increase the search area of the watermarking image. Better PSO results were obtained when using a swarm size of 100.
Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform
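The PSNR and MSE figures quoted above follow the standard definitions, sketched below for 8-bit images (MAX = 255); the sample pixel lists are illustrative.

```python
# Standard MSE and PSNR definitions used to compare watermarked images
# against the originals; 8-bit images assumed, so the peak value MAX is 255.
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(max_val ** 2 / e)
```

Higher PSNR means the watermark distorts the cover image less, which is why the PSO result (PSNR 53, MSE 0.0039) is reported as better than the GA result (PSNR 50.5, MSE 0.0048).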
Procedia PDF Downloads 226
4096 A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery
Authors: Yongquan Zhao, Bo Huang
Abstract:
Spatial, Temporal, and Spectral Resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single satellite sensor can provide Earth observations with high STSR simultaneously because of the hardware technology limitations of satellite sensors. On the other hand, the demand for high STSR has been growing with the development of remote sensing applications. Although image fusion technology provides a feasible means to overcome the limitations of current Earth observation data, current fusion technologies cannot enhance all three resolutions simultaneously, nor provide a high enough level of resolution improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with high STSR simultaneously, which blends the high spatial resolution from the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution from the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution from the hyper-spectral image of Hyperion to produce high-STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in a suburban area of Beijing, China is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM can produce a fused image with good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support studies that require high-STSR satellite imagery.
Keywords: hybrid spatial-temporal-spectral fusion, high resolution synthetic imagery, least square regression, sparse representation, spectral transformation
Procedia PDF Downloads 235
4095 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method
Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat
Abstract:
Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance from the visual similarity of low-level image features such as color and texture. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical way of bridging this semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. First, images are segmented using the maximum intra-cluster variance criterion and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for the optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain representative features. The images are then annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning with high precision and low complexity. Experiments are performed on the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
Keywords: feature extraction, feature selection, image annotation, classification
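The maximum-variance thresholding that the Firefly algorithm optimizes can be sketched for the single-threshold case by exhaustive search (a stand-in for the swarm search, not the authors' implementation):

```python
import numpy as np

def otsu_threshold(gray):
    """Maximum between-class variance (Otsu) threshold on a uint8 image.

    The paper searches multilevel thresholds with the Firefly algorithm;
    this sketch exhaustively scans the single-threshold case instead.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # intensity probabilities
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Swarm methods such as Firefly matter when several thresholds are sought at once, where this exhaustive scan becomes combinatorially expensive.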
Procedia PDF Downloads 586
4094 A Robust Digital Image Watermarking Against Geometrical Attack Based on Hybrid Scheme
Authors: M. Samadzadeh Mahabadi, J. Shanbehzadeh
Abstract:
This paper presents a hybrid digital image watermarking scheme that is robust against a variety of attacks and geometric distortions. The image content is represented by important feature points obtained by an image-texture-based adaptive Harris corner detector. These feature points are extracted from the LL2 subband of the 2-D discrete wavelet transform using the Harris-Laplace detector. We calculate the Fourier transform of circular regions around these points; the amplitude of this transform is rotation invariant. The experimental results demonstrate the robustness of the proposed method against geometric distortions and common image processing operations such as JPEG compression, colour reduction, Gaussian filtering, median filtering, and rotation.
Keywords: digital watermarking, geometric distortions, geometrical attack, Harris Laplace, important feature points, rotation, scale invariant feature
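One standard way to obtain the rotation-invariant amplitude the abstract describes is to sample the circular region along its angular coordinate and take a 1-D FFT magnitude: rotating the image circularly shifts the samples, which changes only the phase. A minimal sketch, where the helper name and nearest-neighbour sampling are our assumptions rather than the authors' exact construction:

```python
import numpy as np

def ring_descriptor(img, cy, cx, r, n=64):
    """Rotation-invariant descriptor of a circular region (sketch).

    Samples the image at n points on a circle of radius r around the
    feature point (cy, cx) and returns the 1-D FFT magnitude of the
    samples; a rotation about the point only shifts them circularly.
    """
    theta = 2 * np.pi * np.arange(n) / n
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    samples = img[ys, xs]                     # intensities along the ring
    return np.abs(np.fft.fft(samples))        # magnitude: shift-invariant
```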
Procedia PDF Downloads 501
4093 Image Compression Based on Regression SVM and Biorthogonal Wavelets
Authors: Zikiou Nadia, Lahdir Mourad, Ameur Soltane
Abstract:
In this paper, we propose an effective method for image compression based on Support Vector Regression (SVR), with three different kernels, and the biorthogonal 2-D Discrete Wavelet Transform. SVR learns the dependency in the training data and represents it with fewer points (the support vectors), eliminating redundancy. A biorthogonal wavelet transforms the image, and the coefficients obtained are then fitted by SVR with different kernels (Gaussian, polynomial, and linear). Run-length and arithmetic coders encode the support vectors and their corresponding weights obtained from the regression. The peak signal-to-noise ratio (PSNR) and compression ratios of several test images compressed with our algorithm under the different kernels are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method improves considerably.
Keywords: image compression, 2D discrete wavelet transform (DWT-2D), support vector regression (SVR), SVM kernels, run-length, arithmetic coding
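Of the two entropy coders mentioned, the run-length stage is simple enough to sketch (the arithmetic coder is omitted); quantized support-vector weights and zeroed wavelet coefficients contain long runs that this packs cheaply. Function names here are illustrative, not from the paper:

```python
def rle_encode(seq):
    """Run-length encode a sequence as (value, count) pairs (sketch)."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)   # extend the current run
        else:
            out.append((v, 1))              # start a new run
    return out

def rle_decode(pairs):
    """Invert rle_encode, restoring the original sequence."""
    return [v for v, n in pairs for _ in range(n)]
```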
Procedia PDF Downloads 381
4092 Bank Liquidity Creation in a Dual Banking System: An Empirical Investigation
Authors: Lianne M. Q. Lee, Mohammed Sharaf Shaiban
Abstract:
The importance of bank liquidity management took center stage as policy makers promoted a more resilient global banking system after the market turmoil of 2007. The growing recognition of Islamic banks' function of intermediating funds in the economy warrants investigating their balance sheet structure, which is distinct from that of their conventional counterparts. Given that asymmetric risk transformation is inevitable, Islamic banks need to identify the liquidity risk within their distinctive balance sheet structure. There is thus a strong need to quantify and assess the liquidity position to ensure the proper functioning of a financial institution; measuring bank liquidity is vital because liquid banks face less liquidity risk. We examine this issue using two alternative quantitative measures of liquidity creation, "cat fat" and "cat nonfat", constructed by Berger and Bouwman (2009). "Cat fat" includes both on- and off-balance-sheet items, whilst "cat nonfat" measures only on-balance-sheet items. Liquidity creation is measured over the period 2007-2014 in 14 countries where Islamic and conventional commercial banks coexist, and separately by bank size class, as empirical studies have shown that liquidity creation varies by bank size. An interesting and important finding is that all size classes of Islamic banks have, on average, increased their aggregate liquidity creation in real dollar terms over the years under both measures, especially large banks, indicating that Islamic banks actually generate more liquidity for the economy than their conventional counterparts, including from off-balance-sheet items. The liquidity creation from off-balance-sheet items by conventional banks may have been affected by the global financial crisis, when derivatives markets were severely hit.
The results also suggest that Islamic banks have a higher volume of assets and deposits, and that borrowing and bond issuance are lower in Islamic banks than in conventional banks because most such products are interest-based. As Islamic banks appear to create more liquidity than conventional banks under both measures, this indicates that the development of Islamic banking has been significant over the decades since its inception. This finding is encouraging as, despite Islamic banking's overall size, it represents growth opportunities for these countries.
Keywords: financial institution, liquidity creation, liquidity risk, policy and regulation
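The Berger-Bouwman construction behind both measures assigns weights of +1/2 to illiquid assets and liquid liabilities, 0 to semiliquid items, and -1/2 to liquid assets and illiquid liabilities (including equity); "cat fat" additionally adds +1/2 of illiquid off-balance-sheet guarantees. A hedged sketch of the arithmetic, with argument names of our choosing (real applications first classify each balance-sheet line item into these buckets):

```python
def liquidity_creation(illiq_assets, semi_assets, liq_assets,
                       liq_liab, semi_liab, illiq_liab,
                       illiq_offbs=0.0, include_offbs=False):
    """Berger-Bouwman style liquidity creation (sketch).

    Semiliquid items (semi_assets, semi_liab) carry zero weight and are
    kept in the signature only to document the classification. With
    include_offbs=True this is the 'cat fat' measure; otherwise
    'cat nonfat'.
    """
    lc = 0.5 * (illiq_assets + liq_liab) - 0.5 * (liq_assets + illiq_liab)
    if include_offbs:
        lc += 0.5 * illiq_offbs   # off-balance-sheet guarantees, 'cat fat'
    return lc
```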
Procedia PDF Downloads 349
4091 Image-Based (RGB) Technique for Estimating Phosphorus Levels of Different Crops
Authors: M. M. Ali, Ahmed Al- Ani, Derek Eamus, Daniel K. Y. Tan
Abstract:
In this glasshouse study, we developed a new image-based, non-destructive technique for detecting the leaf P status of different crops, namely cotton, tomato and lettuce. Plants were grown on nutrient media containing different P concentrations, i.e., 0%, 50% and 100% of the recommended P concentration (P0 = no P; P1 = 2.5 mL 10 L-1 of P and P2 = 5 mL 10 L-1 of P as NaH2PO4). After 10 weeks of growth, plants were harvested and data on leaf P contents were collected using the standard destructive laboratory method, while leaf images were collected with a handheld crop image sensor. We calculated the leaf area, leaf perimeter and RGB (red, green and blue) values of these images. These data were then used in a linear discriminant analysis (LDA), which successfully classified the plants on the basis of their leaf P contents. The data indicate that P deficiency in crop plants can be predicted from image and morphological data. Our proposed non-destructive imaging method is precise in estimating the P requirements of different crop species.
Keywords: image-based techniques, leaf area, leaf P contents, linear discriminant analysis
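The LDA step can be sketched for the two-class case with a plain Fisher discriminant on the extracted features (e.g. mean R, G, B per leaf); this is a generic sketch under our own feature layout, not the authors' exact model:

```python
import numpy as np

def fisher_lda_fit(X0, X1):
    """Two-class Fisher discriminant on leaf features (sketch).

    X0, X1: (n, d) feature rows (e.g. mean R, G, B) for P-deficient
    and P-sufficient plants. Returns (w, threshold); a sample x is
    assigned class 1 when x @ w > threshold.
    """
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)       # discriminant direction
    threshold = 0.5 * (m0 + m1) @ w        # midpoint of projected means
    return w, threshold
```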
Procedia PDF Downloads 380
4090 An Image Stitching Approach for Scoliosis Analysis
Authors: Siti Salbiah Samsudin, Hamzah Arof, Ainuddin Wahid Abdul Wahab, Mohd Yamani Idna Idris
Abstract:
Standard X-ray spine images produced by the conventional screen-film technique have a limited field of view. This limitation may obstruct a complete inspection of the spine unless images of different parts of the spine are placed contiguously next to each other to form a complete structure. Another way of producing a whole-spine image is to assemble the digitized X-ray images of its parts automatically using image stitching. This paper presents a new Medical Image Stitching (MIS) method that utilizes Minimum Average Correlation Energy (MACE) filters to identify and merge pairs of X-ray medical images. The effectiveness of the proposed method is demonstrated in two sets of experiments involving two databases containing a total of 40 pairs of overlapping and non-overlapping spine images. The experimental results are compared to those produced by the Normalized Cross Correlation (NCC) and Phase Only Correlation (POC) methods. The proposed method outperforms both in identifying overlapping and non-overlapping medical images. Its efficacy is further vindicated by its average execution time, which is about two to five times shorter than those of the POC and NCC methods.
Keywords: image stitching, MACE filter, panorama image, scoliosis
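The POC baseline the paper compares against can be sketched directly (MACE filter synthesis needs a training set and is omitted here): for a purely translated image pair, the phase-correlation surface peaks exactly at the offset between them.

```python
import numpy as np

def phase_correlation_offset(a, b):
    """Estimate the circular shift taking image a to image b (sketch).

    Keeps only the phase of the cross-power spectrum, so the inverse
    FFT is a sharp peak at the translation. This illustrates POC, one
    of the paper's baselines, not the MACE filter itself.
    """
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B                # cross-power spectrum
    R /= np.abs(R) + 1e-12            # normalize to phase only
    corr = np.fft.ifft2(R).real       # correlation surface
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    return int(dy), int(dx)
```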
Procedia PDF Downloads 458
4089 The Influence of Social Media on the Body Image of First Year Female Medical Students of University of Khartoum, 2022
Authors: Razan Farah, Siham Ballah
Abstract:
Facebook, Instagram, TikTok and other social media applications have become an integral component of everyone's social life, particularly among younger generations and adolescents. These social apps have been changing many conceptions and beliefs in the population by presenting public figures and celebrities as role models. The social comparison theory, which says that people self-evaluate based on comparisons with similar others, is commonly used to explore the impact of social media on body image. There is a need to study the influence of these social platforms on body image, as body dissatisfaction has increased in recent years. This cross-sectional study used a self-administered questionnaire on a simple random sample of 133 female medical students of the first year. The response rate was 75%. There was an association between social media usage and noticing how one looks (p value = .022), but no significant association between social media use and body image influence or dissatisfaction was found. This study calls for more research on this topic in Sudan, as the literature is scarce.
Keywords: body image, body dissatisfaction, social media, adolescents
Procedia PDF Downloads 71
4088 Internet Memes: A Mirror of Culture and Society
Authors: Alexandra-Monica Toma
Abstract:
As the internet became a ruling force of society, computer-mediated communication has enriched its methods of conveying meaning by combining linguistic means with visual means of expressivity. One element of cyberspace is what we call a meme: a succinct, visually engaging tool used to communicate ideas or emotions, usually in a funny or ironic manner. Coined by Richard Dawkins in the late 1970s to refer to cultural genes, the term now denotes a special type of vernacular language used to share content on the internet. This research analyses the basic mechanism of meme creation as a blend of innovation and imitation, approaches some of the most widely remixed image macros, and points out success strategies. Moreover, this paper discusses whether memes can transcend their light-hearted and playful mood and become sharp, biting cultural comments. The study also uses the concept of multimodality and stresses how text interacts with image, discussing three types of relations between the two: symmetry, amplification, and contradiction. Using a corpus of memes created in relation to the COVID-19 pandemic, we furthermore show that memes are cultural artifacts and virtual tropes highly dependent on context and societal issues.
Keywords: context, computer-mediated communication, memes, multimodality
Procedia PDF Downloads 184
4087 Automatic Algorithm for Processing and Analysis of Images from the Comet Assay
Authors: Yeimy L. Quintana, Juan G. Zuluaga, Sandra S. Arango
Abstract:
The comet assay is an electrophoresis-based method used to measure DNA damage in cells; it has produced important results in identifying substances that pose a potential risk to the human population, including innumerable physical, chemical and biological agents. The technique yields comet-like images in which the tail corresponds to damaged DNA fragments. One of the main problems is that the images have uneven luminosity caused by the fluorescence microscope; each image must therefore be conditioned, the analyzable comets per sample identified, and finally the measurements performed to determine the percentage of DNA damage. In this paper, we propose the design and implementation of software, using the MATLAB Image Processing Toolbox, that automates this image processing. The software selects the optimal comets and measures the parameters needed to quantify the damage.
Keywords: artificial vision, comet assay, DNA damage, image processing
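The final measurement step can be sketched in a few lines (in Python rather than MATLAB, and with a simplified head/tail split that is our assumption, not the authors' algorithm): subtract background, locate the bright head, and report the fraction of intensity to its right.

```python
import numpy as np

def tail_dna_percent(comet, background=10):
    """Percent DNA in the comet tail (sketch).

    comet: 2-D intensity image of one horizontally oriented comet with
    the head on the left. The head region is taken as the columns whose
    summed intensity stays within half of the peak; everything to the
    right counts as tail.
    """
    img = np.clip(comet.astype(float) - background, 0, None)  # remove background
    profile = img.sum(axis=0)                  # column-wise intensity
    peak = profile.max()
    head_end = np.nonzero(profile >= 0.5 * peak)[0][-1]  # last bright column
    tail = profile[head_end + 1:].sum()
    total = profile.sum()
    return 100.0 * tail / total if total else 0.0
```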
Procedia PDF Downloads 310
4086 The "Street Less Traveled": Body Image and Its Relationship with Eating Attitudes, Influence of Media and Self-Esteem among College Students
Authors: Aditya Soni, Nimesh Parikh, R. A. Thakrar
Abstract:
Background: This cross-sectional study focused on body image satisfaction, a hitherto under-investigated area in our setting, and examined its relationship with body mass index, media influence and self-esteem. Our second objective was to assess whether there was any relationship between body image dissatisfaction and gender. Methods: A cross-sectional study of body image satisfaction described in words was undertaken, which also explored its relationship with body mass index (BMI), media influence, self-esteem and other selected co-variables (socio-demographic details, overall satisfaction in life and particularly in academic/professional life, and current health status), measured on 5-point Likert scales. Convenience sampling was used to select 303 participants of both genders aged 17 to 32. Results: Body image satisfaction was significantly related to body mass index (P<0.001), eating attitude (P<0.001), media influence (P<0.001) and self-esteem (P<0.001). Underweight students had a significantly higher prevalence of body image satisfaction, while overweight students had a significantly higher prevalence of dissatisfaction (P<0.001). Females showed more concern about body image than males. Conclusions: Overall, this study reveals that eating attitude, media influence and self-esteem are significantly related to body image. On an encouraging note, this level of satisfaction needs to be preserved for the sound overall mental development of individuals. Proactive preventive measures could be initiated in institutions on personality development and on acceptance of oneself and of individual differences, while maintaining an optimal weight and an active lifestyle.
Keywords: body image, body mass index, media, self-esteem
Procedia PDF Downloads 574
4085 Detecting and Disabling Digital Cameras Using D3CIP Algorithm Based on Image Processing
Authors: S. Vignesh, K. S. Rangasamy
Abstract:
The paper deals with a device capable of detecting and disabling digital cameras. The system locates a camera and then neutralizes it. Every digital camera has an image sensor, known as a CCD, which is retro-reflective and sends light back directly to its original source at the same angle. The device shines infrared LED light, invisible to the human eye, at a distance of about 20 feet and collects video of the reflections with a camcorder. The video is transferred to a computer connected to the device, where image processing algorithms pick out the infrared light bouncing back. Once a camera is detected, the device projects an invisible infrared laser into the camera's lens, overexposing the photo and rendering it useless. Low levels of infrared laser neutralize digital cameras while posing neither a health danger to humans nor physical damage to cameras. We also discuss a simplified design of this device that can be used in theatres to prevent piracy. The domains covered here are optics and image processing.
Keywords: CCD, optics, image processing, D3CIP
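The detection half of the pipeline reduces to frame differencing: a retro-reflection appears as a bright blob only while the illuminator is on. A sketch under the assumption that we have aligned grayscale frames with the IR source on and off (the function name and threshold are illustrative, not the D3CIP algorithm itself):

```python
import numpy as np

def detect_retroreflection(frame_on, frame_off, thresh=50):
    """Locate a candidate camera sensor by retro-reflection (sketch).

    frame_on / frame_off: grayscale frames captured with the IR
    illuminator on and off. Pixels bright only under illumination are
    reflections; their centroid is returned, or None if nothing is lit.
    """
    diff = frame_on.astype(int) - frame_off.astype(int)
    mask = diff > thresh                      # pixels lit only under IR
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.mean()), int(xs.mean())     # centroid of the reflection
```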
Procedia PDF Downloads 357
4084 Identification of How Pre-Service Physics Teachers Understand Image Formations through Virtual Objects in the Field of Geometric Optics and Development of a New Material to Exploit Virtual Objects
Authors: Ersin Bozkurt
Abstract:
The aim of the study is to develop materials for understanding image formation by virtual objects in geometric optics. The images in physics course books are formed using real objects only; this leads to mistaken generalizations about the features of images and hence to conceptual misunderstandings. This study set out to identify pre-service physics teachers' misunderstandings arising from such false generalizations. Focus group interviews were used as a qualitative method. The findings show that students hold several misconceptions, such as "the image in a plane mirror is always virtual", whereas in fact a real image can be formed in a plane mirror when the object is virtual. To explain image formation by a virtual object in a more understandable way, an overhead projector and an episcope were illustrated together with their design. The illustrations are original, and several computer simulations will be suggested.
Keywords: computer simulations, geometric optics, physics education, students' misconceptions in physics
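The misconception can be checked numerically against the mirror equation 1/do + 1/di = 1/f under the real-is-positive sign convention, with a plane mirror as the f → ∞ limit. A small sketch (the function is ours, for illustration):

```python
def mirror_image_distance(d_object, focal_length=None):
    """Image distance from the mirror equation 1/do + 1/di = 1/f (sketch).

    Real-is-positive convention. focal_length=None models a plane
    mirror (f -> infinity), giving di = -do: a real object (do > 0)
    forms a virtual image, but a virtual object (do < 0, converging
    light aimed behind the mirror) forms a REAL image, which is the
    point the abstract makes.
    """
    if focal_length is None:                  # plane mirror limit
        return -d_object
    return 1.0 / (1.0 / focal_length - 1.0 / d_object)
```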
Procedia PDF Downloads 404