Search results for: visual features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5419

4819 Virtual Computing Lab for Phonics Development among Deaf Students

Authors: Ankita R. Bansal, Naren S. Burade

Abstract:

The idea is to create a cloud-based virtual lab for deaf students, “A Language Acquisition Program Using Visual Phonics and Cued Speech”, using VMware Virtual Lab. This lab will demonstrate to students the sounds of letters associated with the language, building letter blocks, making words, etc. Virtual labs are used for demos, training, and the lingual development of children in their vernacular language. The main potential benefits are reduced labour and hardware costs and faster response times for users. Virtual Computing Labs allow any of the software-as-a-service, virtualization, and terminal-services solutions available today to be offered as a service on demand, where a single instance of the software runs on the cloud and serves multiple end users. VMware, XEN, MS Virtual Server, Virtuoso, and Citrix are typical examples.

Keywords: visual phonics, language acquisition, vernacular language, cued speech, virtual lab

Procedia PDF Downloads 585
4818 Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain

Authors: W. S. Besbas, M. A. Artemi, R. M. Salman

Abstract:

Content-based face sketch retrieval can be used to find images of criminals from their sketches for crime prevention. This paper investigates the problem of content-based image retrieval (CBIR) of face sketch images in the transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectral domain of selected transforms: the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the Walsh Hadamard Transform (WHT). For the performance analysis of the feature selection methods, three face image databases are used: the Sheffield face database, the Olivetti Research Laboratory (ORL) face database, and the Indian face database. The city block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that the retrieval rate is database dependent, but in general the DCT performs best, while the WHT is the fastest at retrieving images.

Keywords: Content Based Image Retrieval (CBIR), face sketch image retrieval, features selection for CBIR, image retrieval in transform domain

Procedia PDF Downloads 475
4817 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion

Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong

Abstract:

The detection of curvilinear structures often plays an important role in the analysis of images. In particular, it is considered a crucial step in the diagnosis of chronic respiratory diseases to localize the fissures in chest CT imagery, where the lung is divided into five lobes by fissures that appear as linear features. However, these characteristic linear features are often subtle due to high intensity variability, pathological deformation, or image noise introduced during the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. It is therefore desirable to enhance the linear features present in chest CT images so that the lobes can be delineated more distinctly. We propose a recursive diffusion process that prefers coherent features based on an anisotropic analysis of the structure tensor. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure, degrading the discriminative power of the feature space. It is thus necessary to take the local scale and direction of the features into account when computing the structure tensor. We apply an anisotropic diffusion that respects the scale and direction of the features in the computation of the structure tensor; the eigenanalysis of this tensor in turn provides the geometrical structure of the features and determines the shape of the anisotropic diffusion kernel. The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the anisotropic diffusion based on geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space in which the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. Our algorithm is shown to yield precise detection of the fissures while overcoming the subtlety of the characteristic linear features. The quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of false positive and true positive measures. The receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
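To make the recursion concrete, the following is a minimal sketch of one step of the idea: compute the structure tensor, take its eigenanalysis to obtain a local coherence measure, and use that coherence to steer a simplified smoothing step that preserves line-like structure. The coherence-weighted blending is an illustrative simplification for exposition, not the authors' geometry-driven kernel.

```python
# A minimal sketch, assuming a 2-D grey-scale image: structure tensor,
# eigenanalysis for coherence, and coherence-steered recursive smoothing.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_coherence(img, sigma=2.0):
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(ix * ix, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    # eigenvalues of the 2x2 structure tensor at every pixel
    tmp = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    l1, l2 = (jxx + jyy + tmp) / 2, (jxx + jyy - tmp) / 2
    return (l1 - l2) / (l1 + l2 + 1e-12)   # coherence in [0, 1]

def recursive_diffusion(img, n_iter=5, sigma=2.0):
    out = img.astype(float)
    for _ in range(n_iter):
        c = structure_tensor_coherence(out, sigma)
        smoothed = gaussian_filter(out, 1.0)
        # smooth strongly where coherence is low, weakly on coherent (curvilinear) structure
        out = c * out + (1 - c) * smoothed
    return out

# toy usage: a noisy image containing a faint line
rng = np.random.default_rng(4)
img = rng.normal(0, 0.3, (128, 128))
img[64, 20:108] += 2.0
enhanced = recursive_diffusion(img)
print(enhanced.shape)
```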

Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor

Procedia PDF Downloads 219
4816 Multiplayer RC-car Driving System in a Collaborative Augmented Reality Environment

Authors: Kikuo Asai, Yuji Sugimoto

Abstract:

We developed a prototype system for multiplayer RC-car driving in a collaborative Augmented Reality (AR) environment. The tele-existence environment is constructed by superimposing digital data onto images captured by a camera on an RC-car, enabling players to experience an augmented coexistence of digital content and the real world. Marker-based tracking was used for estimating the position and orientation of the camera. Multiple RC-cars can be operated in a field where square markers are arranged. The video images captured by the camera are transmitted to a PC for visual tracking. The RC-cars are also tracked by an infrared camera attached to the ceiling, which reduces instability in the visual tracking. Multimedia data such as text and graphics are overlaid onto the video images in a geometrically correct manner. The prototype system allows a tele-existence sensation to be augmented in a collaborative AR environment.

Keywords: multiplayer, RC-car, collaborative environment, augmented reality

Procedia PDF Downloads 274
4815 Latest Finding about Copper Sulfide Biomineralization and General Features of Metal Sulfide Biominerals

Authors: Yeseul Park

Abstract:

Biopolymers produced by organisms contribute greatly to the production of metal sulfides in both extracellular and intracellular biomineralization. We discovered a new type of intracellular biomineral composed of copper sulfide in the periplasm of a sulfate-reducing bacterium. We suggest that the structural features of this biomineral, which is composed of 1-2 nm subgrains, are based on biopolymer-based capping agents and an organic compartment. We further compare it with other types of metal sulfide biominerals.

Keywords: biomineralization, copper sulfide, metal sulfide, biopolymer, capping agent

Procedia PDF Downloads 99
4814 The Lived Experience of Risk and Protective Contexts of Blind Successful University Students in Sidist Kilo Campus

Authors: Zelalem Markos Borko

Abstract:

The quality of life of people with blindness is significantly influenced by the level of resilience they possess. A qualitative approach with a descriptive phenomenological design was employed to address the basic study objectives. The researcher purposely selected three blind graduate students from Sidist Kilo Campus and conducted semi-structured interviews to gather data. Data were analyzed using thematic coding techniques. The present study found that personal characteristics such as commitment, living hope, motivation, positive self-esteem, self-confidence, and communication shaped resilience for successful university students with visual disabilities. The findings showed that the school environment is the place in which blind students developed and experienced the social, psychological, and economic competencies and the hope needed for their academic and life success. Furthermore, the findings showed that blind students experienced individual, family, school, and community-related risks on the path to success. Therefore, governmental and non-governmental organizations should provide training for students with visual impairments that focuses on the individual traits that shape resilience for academic success, such as commitment, living hope, motivation, positive self-esteem, self-confidence, and communication; community-oriented training should also be provided to break the social stigma and discrimination against individuals with visual impairment.

Keywords: blind students, risk and protective factors, lived experience, success

Procedia PDF Downloads 59
4813 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of different dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces image detail by pooling. This operation overlooks details of great concern to forensic experts. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with the known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results support that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled in distinguishing category features, whereas forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 113
4812 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers

Authors: C. V. Aravinda, H. N. Prakash

Abstract:

In this paper, we try to convey the fusion and state of the art pertaining to South Indian Language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized to perform the text identification correctly. The second step involves extracting relevant and informative features. The third step implements the classification decision. The three stages involved are data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques for obtaining features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the central pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this central pixel are considered; their directions are measured and stored as pairs, and a joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue because different approaches use different varieties of features. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
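The edge-hinge idea described above can be illustrated with a small sketch: slide a window over the edge-detected binary image, collect the directions of the edge pixels on the window perimeter around each "on" centre pixel, and accumulate the direction pairs into a joint histogram. The window radius, bin count, and perimeter-based approximation of the two edge legs are assumptions for illustration, not the authors' exact implementation.

```python
# A minimal sketch of an edge-hinge style feature from a binary edge image.
import numpy as np

def edge_hinge_histogram(edges, radius=3, n_bins=12):
    """edges: 2-D boolean/0-1 array of an edge-detected handwriting image."""
    h, w = edges.shape
    hist = np.zeros((n_bins, n_bins), dtype=np.float64)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        if y < radius or x < radius or y >= h - radius or x >= w - radius:
            continue  # skip borders so the window fits
        # edge pixels on the window perimeter around the centre (the "hinge legs")
        angles = []
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if max(abs(dy), abs(dx)) != radius:
                    continue  # perimeter only
                if edges[y + dy, x + dx]:
                    angles.append(np.arctan2(dy, dx))
        # quantise every unordered pair of leg directions into the joint histogram
        bins = np.floor((np.array(angles) + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        for i in range(len(bins)):
            for j in range(i + 1, len(bins)):
                hist[bins[i], bins[j]] += 1
    total = hist.sum()
    return (hist / total).ravel() if total > 0 else hist.ravel()

# toy usage on a random "edge map"
rng = np.random.default_rng(0)
toy_edges = rng.random((64, 64)) > 0.9
features = edge_hinge_histogram(toy_edges)
print(features.shape)  # (144,) joint probability distribution as a feature vector
```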

Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages

Procedia PDF Downloads 477
4811 Animated Poetry-Film: Poetry in Action

Authors: Linette van der Merwe

Abstract:

It is known that visual artists, performing artists, and literary artists have inspired each other since time immemorial. The enduring, symbiotic relationship between the various art genres is evident where words, colours, lines, and sounds act as metaphors, a physical separation of the transcendental reality of art. Simonides of Keos (c. 556-468 BC) confirmed this, stating that a poem is a talking picture or, in a more modern expression, that a picture is worth a thousand words. It can be seen as an ancient relationship, originating from the epigram (tombstone or artefact inscriptions), the carmen figuratum (figure poem), and the ekphrasis (a poem describing a work of art). Visual artists, including Michelangelo, Leonardo da Vinci, and Goethe, wrote poems and songs. Goya, Degas, and Picasso are famous for their works of art and for trying their hands at poetry. Afrikaans writers whose fine art is often published together with their writing, such as Andries Bezuidenhout, Breyten Breytenbach, Sheila Cussons, Hennie Meyer, Carina Stander, and Johan van Wyk, among others, are not a strange phenomenon either. Rendering one art form in another is a form of translation, transposition, contemplation, and discovery of artistic impressions, showing parallel interpretations rather than physical comparison. It is especially about the harmony that exists between the different art genres, i.e., a poem that describes a painting or a visual text that portrays a poem, which becomes a translation, interpretation, and rediscovery of the verbal text, or rather, a movement from the word text to the image text. Poetry-film, as a form of such a translation of the word text into an image text, can be considered a hybrid, transdisciplinary art form that connects poetry and film. Poetry-film is regarded as an intertwined entity of word, sound, and visual image. It is an attempt to transpose and transform a poem into a new artwork that makes the poem more accessible to people who are not necessarily open to the written word and will, in effect, attract a larger audience to a genre that usually has a limited market. Poetry-film is considered a creative expression of an inverted ekphrastic inspiration, a visual description, interpretation, and expression of a poem. Research also emphasises that animated poetry-film is not widely regarded as a genre in its own right and is thus severely under-theorized. This paper will focus on Afrikaans animated poetry-films as a multimodal transposition of a poem text to an animated poetry film, with specific reference to animated poetry-films in Filmverse I (2014) and Filmverse II (2016).

Keywords: poetry film, animated poetry film, poetic metaphor, conceptual metaphor, monomodal metaphor, multimodal metaphor, semiotic metaphor, multimodality, metaphor analysis, target domain, source domain

Procedia PDF Downloads 50
4810 Development of an Artificial Neural Network to Measure Science Literacy Leveraging Neuroscience

Authors: Amanda Kavner, Richard Lamb

Abstract:

Faster growth in science and technology in other nations may make it more difficult for the US to stay globally competitive without a shift in focus on how science is taught in US classes. An integral part of learning science involves visual and spatial thinking, since complex, real-world phenomena are often expressed in visual, symbolic, and concrete modes. The primary barrier to spatial thinking and visual literacy in Science, Technology, Engineering, and Math (STEM) fields is representational competence, which includes the ability to generate, transform, analyze, and explain representations, as opposed to generic spatial ability. Although the relationship between foundational visual literacy and domain-specific science literacy is known, science literacy as a function of science learning is still not well understood. Moreover, a more reliable measure is needed to design resources that enhance the fundamental visuospatial cognitive processes behind scientific literacy. To support the improvement of students’ representational competence, the visualization skills necessary to process these science representations first needed to be identified, which necessitates the development of an instrument to quantitatively measure visual literacy. With such a measure, schools, teachers, and curriculum designers can target the individual skills necessary to improve students’ visual literacy, thereby increasing science achievement. This project details the development of an artificial neural network capable of measuring science literacy using functional Near-Infrared Spectroscopy (fNIR) data. This data was previously collected by Project LENS (Leveraging Expertise in Neurotechnologies), a Science of Learning Collaborative Network (SL-CN) of scholars of STEM education from three US universities (NSF award 1540888), utilizing mental rotation tasks to assess student visual literacy. Hemodynamic response data from fNIRsoft was exported as an Excel file, with 80 of both the 2D Wedge and Dash models (dash) and the 3D Stick and Ball models (BL). Complexity data were in an Excel workbook separated by participant (ID), containing information for both types of tasks. After changing strings to numbers for analysis, the spreadsheets with measurement data and complexity data were uploaded to RapidMiner’s TurboPrep and merged. Using RapidMiner Studio, a Gradient Boosted Trees artificial neural network (ANN) consisting of 140 trees with a maximum depth of 7 branches was developed, and 99.7% of the ANN predictions are accurate. The ANN determined that the biggest predictors of a successful mental rotation are the individual problem number, the response time, and fNIR optode #16, located along the right prefrontal cortex, which is important in processing visuospatial working memory and episodic memory retrieval, both vital for science literacy. With an unbiased measurement of science literacy provided by psychophysiological measurements with an ANN for analysis, educators and curriculum designers will be able to create targeted classroom resources to help improve student visuospatial literacy, and therefore science literacy.
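For readers who prefer code to the tool-chain description, the following is a minimal sketch of the modelling step, reproduced with scikit-learn's GradientBoostingClassifier instead of RapidMiner and mirroring the reported hyperparameters (140 trees, maximum depth 7). The column names and the synthetic data are assumptions purely for illustration, not the project's actual spreadsheet layout.

```python
# Hedged sketch: gradient boosted trees on hypothetical fNIR-derived columns.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "problem_number": rng.integers(1, 81, n),        # hypothetical task index
    "response_time_ms": rng.normal(2500, 600, n),    # hypothetical response time
    "optode_16": rng.normal(0.0, 1.0, n),            # hypothetical fNIR channel
    "rotation_correct": rng.integers(0, 2, n),       # 1 = successful mental rotation
})
X = df.drop(columns="rotation_correct")
y = df["rotation_correct"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(n_estimators=140, max_depth=7)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# feature importances indicate which predictors drive successful rotations
print(dict(zip(X.columns, model.feature_importances_.round(3))))
```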

Keywords: artificial intelligence, artificial neural network, machine learning, science literacy, neuroscience

Procedia PDF Downloads 108
4809 The Positive Effects of Processing Instruction on the Acquisition of French as a Second Language: An Eye-Tracking Study

Authors: Cecile Laval, Harriet Lowe

Abstract:

Processing Instruction is a psycholinguistic pedagogical approach drawing insights from the Input Processing Model, which establishes the initial innate strategies used by second language learners to connect the form and meaning of linguistic features. With the ever-growing use of technology in Second Language Acquisition research, the present study uses eye-tracking to measure the effectiveness of Processing Instruction in the acquisition of French and its effects on learners’ cognitive strategies. The experiment was designed using a TOBII Pro-TX300 eye-tracker to measure participants’ default strategies when processing French linguistic input and any cognitive changes after receiving Processing Instruction treatment. Participants were drawn from lower-intermediate adult learners of French at the University of Greenwich and randomly assigned to two groups. The study used a pre-test/post-test methodology. The pre-tests (one per linguistic item) were administered via the eye-tracker to both groups one week prior to the instructional treatment. One group received the full Processing Instruction treatment (explicit information on the grammatical item and on the processing strategies, and structured input activities) on the primary target linguistic feature (the French past tense imperfective aspect). The second group received the Processing Instruction treatment without the explicit information on the processing strategies. Three immediate post-tests on the three grammatical structures under investigation (the French past tense imperfective aspect, the French Subjunctive used for the expression of doubt, and the French causative construction with Faire) were administered with the eye-tracker. The eye-tracking data showed a positive change in learners’ processing of the French target features after instruction, with improvement in the interpretation of the three linguistic features under investigation. 100% of participants in both groups made a statistically significant improvement (p=0.001) in the interpretation of the primary target feature (French past tense imperfective aspect) after treatment. 62.5% of participants made an improvement in the secondary target item (French Subjunctive used for the expression of doubt), and 37.5% of participants made an improvement in the cumulative target feature (French causative construction with Faire). Statistically, there was no significant difference between the pre-test and post-test scores for the cumulative target feature; however, the variance approximately tripled between the pre-test and the post-test (3.9 pre-test and 9.6 post-test). This suggests that the treatment does not affect participants homogeneously and implies a role for individual differences in the transfer-of-training effect of Processing Instruction. The use of eye-tracking provides an opportunity to study the unconscious processing decisions made during moment-by-moment comprehension. The visual data from the eye-tracking demonstrate changes in participants’ processing strategies. Gaze plots from pre- and post-tests display participants’ fixation points changing from focusing on content words to focusing on the verb ending. This change in processing strategies can be clearly seen in the interpretation of sentences for both primary and secondary target features. This paper will present the research methodology, design, and results of the experimental study using eye-tracking to investigate the primary effects and transfer-of-training effects of Processing Instruction. It will then provide evidence of the cognitive benefits of Processing Instruction in Second Language Acquisition and offer suggestions for the teaching of grammar in a second language.

Keywords: eye-tracking, language teaching, processing instruction, second language acquisition

Procedia PDF Downloads 266
4808 Improving Coverage in Wireless Sensor Networks Using Particle Swarm Optimization Algorithm

Authors: Ehsan Abdolzadeh, Sanaz Nouri, Siamak Khalaj

Abstract:

Today, WSNs have many applications in different fields such as the environment, military operations, discovery, and monitoring operations. Coverage size and energy consumption are important challenges that these networks need to face. This paper tries to solve the coverage problem under a k-coverage requirement with minimum energy consumption. In order to minimize energy consumption, visual sensor networks are used that observe and process only those targets located in their view direction. As a result, sensor rotations are decreased and, subsequently, energy consumption is minimized. To solve the coverage problem, particle swarm optimization has been able to ensure the coverage requirement while minimizing sensor rotations and meeting the problem requirement of k≤14. Energy consumption has therefore decreased, which can subsequently extend the sensors’ lifetime.
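To illustrate the kind of optimization described above, the following is a minimal sketch of a particle swarm optimizer that chooses view directions for fixed visual sensors so that targets fall inside their field of view while rotations away from the initial orientation are penalized. The sensor positions, field-of-view angle, and fitness weights are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: PSO over sensor orientations for directional coverage.
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, (10, 2))       # fixed sensor positions
targets = rng.uniform(0, 100, (30, 2))       # target positions to cover
init_dir = rng.uniform(-np.pi, np.pi, 10)    # initial sensor orientations
FOV, SENSE_R = np.pi / 3, 40.0               # field of view and sensing range

def fitness(directions):
    """Covered-target count minus a small penalty for total rotation."""
    covered = np.zeros(len(targets), dtype=bool)
    for (sx, sy), d in zip(sensors, directions):
        vec = targets - np.array([sx, sy])
        dist = np.linalg.norm(vec, axis=1)
        ang = np.abs((np.arctan2(vec[:, 1], vec[:, 0]) - d + np.pi) % (2 * np.pi) - np.pi)
        covered |= (dist <= SENSE_R) & (ang <= FOV / 2)
    rotation_cost = np.abs((directions - init_dir + np.pi) % (2 * np.pi) - np.pi).sum()
    return covered.sum() - 0.05 * rotation_cost

# standard PSO update on the vector of sensor orientations
n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
pos = rng.uniform(-np.pi, np.pi, (n_particles, 10))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("covered-target score:", fitness(gbest))
```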

Keywords: k-coverage, particle swarm optimization algorithm, wireless sensor networks, visual sensor networks

Procedia PDF Downloads 99
4807 Facial Biometric Privacy Using Visual Cryptography: A Fundamental Approach to Enhance the Security of Facial Biometric Data

Authors: Devika Tanna

Abstract:

'Biometrics' means 'life measurement', but the term is usually associated with the use of unique physiological characteristics to identify an individual. It is important to secure the privacy of digital face images that are stored in a central database. To impart privacy to such biometric face images, the digital face image is first split into two host face images such that neither of them gives any idea of the existence of the original face image, and then the two host images are stored in two different databases that are geographically apart. Only when both host images are simultaneously available can the original image be accessed. This is achieved using the XM2VTS and IMM face databases and an adaptive algorithm for spatial greyscale images. The algorithm helps to select the appropriate host images, which are most likely to be compatible with the secret image stored in the central database, based on its geometry and appearance. The encryption is done using GEVCS, which results in a reconstructed image identical to the original private image.
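The splitting-into-two-images idea can be illustrated with a minimal sketch of the classic (2,2) visual cryptography scheme: a binary secret image is split into two noise-like shares, and stacking (OR-ing) the shares reveals the secret. This is only a basic illustration of the principle, not the GEVCS/adaptive host-selection scheme used in the paper.

```python
# Hedged sketch: classic (2,2) visual cryptography share generation.
import numpy as np

rng = np.random.default_rng(7)
PATTERNS = np.array([[[1, 0], [0, 1]],   # the two complementary 2x2 sub-pixel layouts
                     [[0, 1], [1, 0]]])  # 1 = black sub-pixel

def split_shares(secret):
    """secret: 2-D array, 1 = black pixel, 0 = white pixel."""
    h, w = secret.shape
    share1 = np.zeros((2 * h, 2 * w), dtype=np.uint8)
    share2 = np.zeros_like(share1)
    for y in range(h):
        for x in range(w):
            p = rng.integers(0, 2)                 # random pattern choice
            share1[2*y:2*y+2, 2*x:2*x+2] = PATTERNS[p]
            # white: identical patterns (overlay stays half-black);
            # black: complementary patterns (overlay becomes fully black)
            q = p if secret[y, x] == 0 else 1 - p
            share2[2*y:2*y+2, 2*x:2*x+2] = PATTERNS[q]
    return share1, share2

secret = np.zeros((8, 8), dtype=np.uint8)
secret[2:6, 2:6] = 1                               # a small black square as the secret
s1, s2 = split_shares(secret)
overlay = np.maximum(s1, s2)                       # stacking = pixel-wise OR
print("black ratio, share 1 alone:", s1.mean())    # ~0.5, no information leaks
print("black ratio inside the secret region of the overlay:",
      overlay[4:12, 4:12].mean())                  # 1.0, secret fully black
```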

Keywords: adaptive algorithm, database, host images, privacy, visual cryptography

Procedia PDF Downloads 110
4806 Drawings Reveal Beliefs of Japanese University Students

Authors: Sakae Suzuki

Abstract:

Although Japanese students study English for six years in secondary schools, they demonstrate little success with it when they enter higher education. Learners’ beliefs can predict the future behavior of students, so it may be effective to investigate how learners’ beliefs limit their success and how beliefs might be nudged in a positive direction. While many researchers still depend on a questionnaire called BALLI to reveal explicit beliefs, alternative approaches, especially those designed to reveal implicit beliefs, might be helpful for promoting learning. The present study seeks to identify beliefs with a discursive approach using visual metaphors and narratives. Employing a sociocultural framework, this study investigates how students’ beliefs are revealed by drawings of themselves and their surrounding environments and artifacts while they are engaged in language learning. Research questions are: (1) Can we identify beliefs through an analysis of students’ visual narratives? (2) What environments and artifacts can be found in students’ drawings, and what do they mean? (3) To what extent do students see language learning as a solitary, rather than a social, activity? Participants are university students majoring in science and technology in Japan. The questionnaire was administered to 70 entering students in April 2014. Data included students’ drawings of themselves as learners of English, as well as written descriptions of students’ backgrounds, English-learning experiences, and analogies and metaphors that they used in written descriptions of themselves as learners. Data will be analyzed qualitatively and quantitatively. Anticipated results include students’ perceptions of themselves as language learners, including their sense of agency, awareness of artifacts, and social contexts of language learning. Comments will be made on implications for teaching, the use of visual narratives as research tools, and recommended further research.

Keywords: drawings, learners' beliefs, metaphors, BALLI

Procedia PDF Downloads 478
4805 Exploring Pisa Monuments Using Mobile Augmented Reality

Authors: Mihai Duguleana, Florin Girbacia, Cristian Postelnicu, Raffaello Brodi, Marcello Carrozzino

Abstract:

Augmented Reality (AR) has taken a big leap with the introduction of mobile applications that co-locate two-dimensional information (e.g., photos, videos, text) and three-dimensional information with the location of the user, enriching his/her experience. This study presents the advantages of using Mobile Augmented Reality (MAR) technologies in travel applications, improving cultural heritage exploration. We propose a location-based AR application that combines co-location with augmented visual information about Pisa monuments to establish friendly navigation in this historic city. AR was used to render contextual visual information in the outdoor environment. The developed Android-based application offers two different options: it provides the ability to identify the monuments positioned close to the user’s position, and it offers location information for getting near the key touristic objectives. We present the process of creating the monuments’ 3D map database and the navigation algorithm.
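A minimal sketch of the proximity logic behind such a location-based guide follows: compute the haversine distance and compass bearing from the user's GPS fix to each point of interest and report the ones within range. The monument coordinates below are rough, illustrative values, not the application's actual 3D map database or navigation algorithm.

```python
# Hedged sketch: distance/bearing from a GPS fix to nearby points of interest.
import math

MONUMENTS = {                      # approximate coordinates, for illustration only
    "Leaning Tower": (43.7230, 10.3966),
    "Pisa Cathedral": (43.7232, 10.3955),
    "Baptistery": (43.7233, 10.3941),
}

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0                  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def nearby(user_lat, user_lon, radius_m=300):
    """Monuments within radius_m of the user, with distance (m) and bearing (deg)."""
    hits = []
    for name, (lat, lon) in MONUMENTS.items():
        d = haversine_m(user_lat, user_lon, lat, lon)
        if d <= radius_m:
            hits.append((name, round(d), round(bearing_deg(user_lat, user_lon, lat, lon))))
    return sorted(hits, key=lambda t: t[1])

print(nearby(43.7225, 10.3960))    # hypothetical GPS fix near Piazza dei Miracoli
```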

Keywords: augmented reality, electronic compass, GPS, location-based service

Procedia PDF Downloads 269
4804 Still Pictures for Learning Foreign Language Sounds

Authors: Kaoru Tomita

Abstract:

This study explores how visual information helps us to learn foreign language pronunciation. Visual assistance and its effect on learning a foreign language have been discussed widely. For example, simplified illustrations in textbooks are used for telling learners which parts of the articulatory organs are used for pronouncing sounds. Vowels are put into a chart that depicts a vowel space. Consonants are put into a table with two axes, place and manner of articulation. When comparing a still picture and a moving picture for visualizing learners’ pronunciation, it becomes clear that the former works better than the latter. The visualization of vowels was applied to class activities in which native and non-native speakers’ English was compared and the learners’ feedback was collected: the positions of the six vowels did not scatter as much as they were expected to, and two vowels in particular were not discriminated and were arranged very close together in the vowel space. It was surprising for the author to find that learners liked analyzing their own pronunciation by linking the first and second formants (F1 and F2) on a sheet of paper with a pencil. Even a simple method works well if it leads learners to think about their pronunciation analytically.
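The vowel-space visualization described above can be sketched in a few lines: plot each vowel by its second formant (F2, x-axis, reversed) and first formant (F1, y-axis, reversed), the usual orientation of a vowel chart. The formant values below are rough, textbook-style figures for illustration only, not the learners' measured data.

```python
# Hedged sketch: an F1/F2 vowel-space chart with illustrative formant values.
import matplotlib.pyplot as plt

vowels = {              # vowel: (F1 in Hz, F2 in Hz), illustrative values
    "i": (300, 2300),
    "e": (450, 2000),
    "a": (750, 1300),
    "o": (500, 900),
    "u": (350, 800),
}

fig, ax = plt.subplots(figsize=(5, 4))
for v, (f1, f2) in vowels.items():
    ax.scatter(f2, f1)
    ax.annotate(v, (f2, f1), textcoords="offset points", xytext=(5, 5))
ax.invert_xaxis()       # high F2 (front vowels) on the left
ax.invert_yaxis()       # low F1 (close vowels) at the top
ax.set_xlabel("F2 (Hz)")
ax.set_ylabel("F1 (Hz)")
ax.set_title("Vowel space from F1/F2 measurements")
plt.tight_layout()
plt.savefig("vowel_space.png")
```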

Keywords: feedback, pronunciation, visualization, vowel

Procedia PDF Downloads 236
4803 Artistic and Technological Features of Bukhara Copper Embossing in the 20th Century

Authors: Zebiniso Mukhsinova

Abstract:

This article discusses the dynamics of the historical development of the Bukhara school of copper-embossed products. Copper embossing is one of the leading crafts of Uzbek decorative and applied art. A critical and analytical assessment is presented of the innovative ideas and the artistic and technological features that arose as a result of the inter-regional synthesis of a local school. The article includes a detailed analysis of exhibits in museum collections and a review of the scientific papers of leading art critics, and it differs from previous studies in this area.

Keywords: applied art, copper embossing, metalwork, ewer, tray, Bukhara school

Procedia PDF Downloads 131
4802 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, envisioning more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of the total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of the coefficient of determination (R²), hypothesis testing, and the pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure led to a good fat segmentation, making this simple visual approach for the quantification of the different fat fractions in dry-cured ham slices sufficiently simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
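The pipeline described above (grey-scale conversion, intensity-histogram thresholding, Canny edge enhancement, fat fraction as a percentage of the slice area) can be sketched with scikit-image on a synthetic test image. The synthetic slice and the Otsu threshold are illustrative assumptions, not the authors' calibrated multi-stage procedure.

```python
# Hedged sketch: threshold-based fat segmentation with Canny edge enhancement.
import numpy as np
from skimage import color, draw, feature, filters

# synthetic "slice": dark lean background with a brighter elliptical fat streak
rgb = np.full((200, 300, 3), 90, dtype=np.uint8)
rr, cc = draw.ellipse(100, 150, 80, 130, shape=rgb.shape[:2])
slice_mask = np.zeros(rgb.shape[:2], dtype=bool)
slice_mask[rr, cc] = True                      # the ham slice itself
rr, cc = draw.ellipse(80, 120, 15, 40, shape=rgb.shape[:2])
rgb[rr, cc] = 220                              # a bright "fat" region
rgb[~slice_mask] = 0

gray = color.rgb2gray(rgb)
edges = feature.canny(gray, sigma=2.0)          # edge-enhancement step
thresh = filters.threshold_otsu(gray[slice_mask])
fat_mask = (gray > thresh) & slice_mask         # bright pixels inside the slice

fat_pct = 100.0 * fat_mask.sum() / slice_mask.sum()
print(f"total fat fraction: {fat_pct:.1f}% of the slice area")
print("edge pixels found:", int(edges.sum()))
```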

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 162
4801 The Dangers of Attentional Inertia in the Driving Task

Authors: Catherine Thompson, Maryam Jalali, Peter Hills

Abstract:

The allocation of visual attention is critical when driving and anything that limits attention will have a detrimental impact on safety. Engaging in a secondary task reduces the amount of attention directed to the road because drivers allocate resources towards this task, leaving fewer resources to process driving-relevant information. Yet the dangers associated with a secondary task do not end when the driver returns their attention to the road. Instead, the attentional settings adopted to complete a secondary task may persist to the road, affecting attention, and therefore affecting driver performance. This 'attentional inertia' effect was investigated in the current work. Forty drivers searched for hazards in driving video clips while their eye-movements were recorded. At varying intervals they were instructed to attend to a secondary task displayed on a tablet situated to their left-hand side. The secondary task consisted of three separate computer games that induced horizontal, vertical, and random eye movements. Visual search and hazard detection in the driving clips were compared across the three conditions of the secondary task. Results showed that the layout of information in the secondary task, and therefore the allocation of attention in this task, had an impact on subsequent search in the driving clips. Vertically presented information reduced the wide horizontal spread of search usually associated with accurate driving and had a negative influence on the detection of hazards. The findings show the additional dangers of engaging in a secondary task while driving. The attentional inertia effect has significant implications for semi-autonomous and autonomous vehicles in which drivers have greater opportunity to direct their attention away from the driving task.

Keywords: attention, eye-movements, hazard perception, visual search

Procedia PDF Downloads 148
4800 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area for the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition, which allows biometric data acquisition from an individual, extracts feature sets, compares the feature set against the set stored in the vault, and gives the result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and prevents possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, using a Convolutional Neural Network (CNN) for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
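The feature-level fusion idea can be illustrated with a minimal sketch: each capture of the same finger is contrast-normalized, a feature vector is extracted, and the per-instance vectors are concatenated into one fused template. The block-mean descriptor below is a stand-in placeholder for the paper's learned segmentation and extraction stage.

```python
# Hedged sketch: preprocessing and feature-level fusion of several captures.
import numpy as np
from skimage import exposure, transform

def preprocess(img):
    """Resize to a fixed grid and apply adaptive histogram equalisation (CLAHE)."""
    img = transform.resize(img, (64, 128), anti_aliasing=True)
    return exposure.equalize_adapthist(img)

def block_mean_features(img, block=8):
    """A simple placeholder descriptor: mean intensity per block."""
    h, w = img.shape
    return img[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()

def fuse(instances):
    """Feature-level fusion: concatenate the per-instance feature vectors."""
    return np.concatenate([block_mean_features(preprocess(im)) for im in instances])

# toy usage with three synthetic captures of the "same" finger
rng = np.random.default_rng(3)
captures = [rng.random((120, 240)) for _ in range(3)]
template = fuse(captures)
print(template.shape)  # (3 * 8 * 16,) = (384,) fused biometric feature vector
```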

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 130
4799 Detection and Classification of Mammogram Images Using Principal Component Analysis and Lazy Classifiers

Authors: Rajkumar Kolangarakandy

Abstract:

Feature extraction and selection is the primary part of any mammogram classification algorithm. The choice of features, attributes, or measurements has an important influence on any classification system. Discrete Wavelet Transform (DWT) coefficients are one of the prominent features for representing images in the frequency domain. The features obtained after the decomposition of the mammogram images using wavelet transformations have a high dimension. Even though the features are high dimensional, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such higher-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transform coefficients and lazy classifiers is proposed. The classification is accomplished at two different levels. At the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. At the second level, all the abnormal mammogram ROIs are classified into benign and malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers KStar, IBL, and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the results are also compared with the performance obtained without dimensionality reduction.
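The DWT-PCA-lazy-classifier pipeline can be sketched in a few lines: decompose each ROI with a 2-D discrete wavelet transform (PyWavelets), reduce the correlated coefficient vectors with PCA, and classify with an instance-based (lazy) learner. scikit-learn's k-nearest-neighbours stands in here for the Weka lazy classifiers (KStar, IBL, LWL), and the random ROIs are synthetic placeholders.

```python
# Hedged sketch: DWT coefficients -> PCA -> lazy (instance-based) classifier.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def dwt_features(roi, wavelet="db4", level=2):
    """Flatten the multilevel 2-D DWT coefficients of a mammogram ROI."""
    coeffs = pywt.wavedec2(roi, wavelet=wavelet, level=level)
    parts = [coeffs[0].ravel()]
    for (cH, cV, cD) in coeffs[1:]:
        parts.extend([cH.ravel(), cV.ravel(), cD.ravel()])
    return np.concatenate(parts)

# synthetic stand-in dataset: 60 ROIs of 64x64 pixels with binary labels
rng = np.random.default_rng(5)
rois = rng.random((60, 64, 64))
labels = rng.integers(0, 2, 60)                 # 0 = normal, 1 = abnormal

X = np.array([dwt_features(r) for r in rois])   # high-dimensional, correlated
clf = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(clf, X, labels, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```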

Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL

Procedia PDF Downloads 321
4798 A Recognition Method of Ancient Yi Script Based on Deep Learning

Authors: Shanxiong Chen, Xu Han, Xiaolong Wang, Hui Ma

Abstract:

The Yi are an ethnic group mainly living in mainland China, with their own spoken and written language systems developed over thousands of years. Ancient Yi is one of the six ancient languages in the world; it keeps a record of the history of the Yi people and offers documents valuable for research into human civilization. Recognition of the characters in ancient Yi helps to transform the documents into an electronic form, making their storage and dissemination convenient. Due to historical and regional limitations, research on the recognition of ancient characters is still inadequate. Thus, deep learning technology was applied to the recognition of such characters. Five models were developed on the basis of a four-layer convolutional neural network (CNN). Alpha-Beta divergence was taken as a penalty term to re-encode the output neurons of the five models. Two fully connected layers fulfilled the compression of the features. Finally, at the softmax layer, the orthographic features of ancient Yi characters were re-evaluated, their probability distributions were obtained, and characters with the features of the highest probability were recognized. Tests conducted show that the method achieves higher precision than the traditional CNN model for handwriting recognition of ancient Yi.
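The base architecture described above (four convolutional layers, two fully connected layers compressing the features, and a final softmax over the character classes) can be sketched in PyTorch as follows. The layer widths, input size, and number of classes are illustrative assumptions, and the Alpha-Beta divergence penalty term is not reproduced here.

```python
# Hedged sketch: a four-conv-layer CNN with two fully connected layers.
import torch
import torch.nn as nn

class YiCharCNN(nn.Module):
    def __init__(self, n_classes=500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(            # two fully connected layers
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)          # probability distribution over classes

# toy forward pass on a batch of 64x64 grey-scale character images
model = YiCharCNN(n_classes=500)
probs = model(torch.randn(8, 1, 64, 64))
print(probs.shape, probs.sum(dim=1))                 # (8, 500), each row sums to 1
```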

Keywords: recognition, CNN, Yi character, divergence

Procedia PDF Downloads 152
4797 Effect of Common Yoga Protocol on Reaction Time of Football Players

Authors: Vikram Singh

Abstract:

The objective of the study was to examine the effectiveness of the common yoga protocol on the reaction time (simple visual reaction time, SVRT, measured in milliseconds) of male football players in the age group of 15 to 21 years. The 40 boys were randomly assigned to two groups, i.e., control and experimental. SVRT for both groups was measured on day one, and post-intervention SVRT (the common yoga protocol being the intervention here) was measured after 45 days of training given to the experimental group only. One-way ANOVA (univariate analysis) and an independent t-test using the SPSS 23 statistical package were applied to obtain and analyze the results. There was a significant difference in the simple visual reaction time of the experimental group after 45 days of the yoga protocol (p = .032), t(33.05) = 3.881, p = .000 (two-tailed). The null hypothesis (that there would be no post-measurement differences in the reaction times of the control and experimental groups) was rejected, since p < .05. Therefore, the alternate hypothesis was accepted.

Keywords: footballers, t-test, yoga protocol, reaction time

Procedia PDF Downloads 242
4796 Development of a Mobile Image-Based Reminder Application to Support Tuberculosis Treatment in Africa

Authors: Haji Ali Haji, Hussein Suleman, Ulrike Rivett

Abstract:

This paper presents the design, development, and evaluation of an application prototype developed to support tuberculosis (TB) patients’ treatment adherence. The system makes use of graphics and voice reminders as opposed to text messaging to encourage patients to follow their medication routine. To evaluate the effect of the prototype application, participants were given mobile phones on which the reminder system was installed. Thirty-eight people, including TB health workers and patients from Zanzibar, Tanzania, participated in the evaluation exercises. The results indicate that the participants found the mobile graphic-based application useful for supporting TB treatment. All participants understood and interpreted the intended meaning of every image correctly. The study findings revealed that the use of a mobile visual-based application may have potential benefits in supporting TB patients (both literate and illiterate) in their treatment process.

Keywords: ICT4D, mobile technology, tuberculosis, visual-based reminder

Procedia PDF Downloads 417
4795 Quantitative Wide-Field Swept-Source Optical Coherence Tomography Angiography and Visual Outcomes in Retinal Artery Occlusion

Authors: Yifan Lu, Ying Cui, Ying Zhu, Edward S. Lu, Rebecca Zeng, Rohan Bajaj, Raviv Katz, Rongrong Le, Jay C. Wang, John B. Miller

Abstract:

Purpose: Retinal artery occlusion (RAO) is an ophthalmic emergency that can lead to poor visual outcome and is associated with an increased risk of cerebral stroke and cardiovascular events. Fluorescein angiography (FA) is the traditional diagnostic tool for RAO; however, wide-field swept-source optical coherence tomography angiography (WF SS-OCTA), as a nascent imaging technology, is able to provide quick and non-invasive angiographic information with a wide field of view. In this study, we looked for associations between OCT-A vascular metrics and visual acuity in patients with a prior diagnosis of RAO. Methods: Patients with diagnoses of central retinal artery occlusion (CRAO) or branched retinal artery occlusion (BRAO) were included. A 6mm x 6mm Angio image and a 15mm x 15mm AngioPlex Montage OCT-A image were obtained for both eyes of each patient using the Zeiss Plex Elite 9000 WF SS-OCTA device. Each 6mm x 6mm image was divided into nine Early Treatment Diabetic Retinopathy Study (ETDRS) subfields. The average measurement of the central foveal subfield, inner ring, and outer ring was calculated for each parameter. Non-perfusion area (NPA) was manually measured using the 15mm x 15mm Montage images. A linear regression model was utilized to identify correlations between the imaging metrics and visual acuity. A P-value less than 0.05 was considered statistically significant. Results: Twenty-five subjects were included in the study. For RAO eyes, there was a statistically significant negative correlation between vision and retinal thickness as well as superficial capillary plexus vessel density (SCP VD). A negative correlation was found between vision and deep capillary plexus vessel density (DCP VD) without statistical significance. There was a positive correlation between vision and choroidal thickness as well as choroidal volume without statistical significance. No statistically significant correlation was found between vision and the above metrics in contralateral eyes. For NPA measurements, no significant correlation was found between vision and NPA. Conclusions: This is, to the best of our knowledge, the first study to investigate the utility of WF SS-OCTA in RAO and to demonstrate correlations between various retinal vascular imaging metrics and visual outcomes. Further investigations should explore the associations between these imaging findings and cardiovascular risk, as RAO patients are at elevated risk for symptomatic stroke. The results of this study provide a basis for understanding the structural changes involved in visual outcomes in RAO. Furthermore, they may help guide the management of RAO and the prevention of cerebral stroke and cardiovascular accidents in patients with RAO.

Keywords: OCTA, swept-source OCT, retinal artery occlusion, Zeiss Plex Elite

Procedia PDF Downloads 120
4794 Multimodality in Storefront Windows: The Impact of Verbo-Visual Design on Consumer Behavior

Authors: Angela Bargenda, Erhard Lick, Dhoha Trabelsi

Abstract:

Research in retailing has identified the importance of atmospherics as an essential element in enhancing store image, store patronage intentions, and the overall shopping experience in a retail environment. However, in the area of atmospherics, store window design, which represents an essential component of external store atmospherics, remains a vastly underrepresented phenomenon in extant scholarship. This paper seeks to fill this gap by exploring the relevance of store window design as an atmospheric tool. In particular, empirical evidence of theme-based theatrical storefront windows, which put emphasis on the use of verbo-visual design elements, was found in Paris and New York. The purpose of this study was to identify to what extent such multimodal window designs of high-end department stores in metropolitan cities have an impact on store entry decisions and attitudes towards the retailer’s image. As theoretical constructs, the linguistic concept of multimodality and Mehrabian and Russell’s model in environmental psychology were applied. To answer the research question, two studies were conducted. For Study 1, a case study approach was selected to define three different types of store window designs based on different types of visual-verbal relations. Each of these types of store window design represented a different level of cognitive elaboration required for the decoding process. Study 2 consisted of an online survey carried out among more than 300 respondents to examine the influence of these three types of store window design on the consumer behavioral variables mentioned above. The results of this study show that the higher the cognitive elaboration needed to decode the message of the store window, the lower the store entry propensity. In contrast, the higher the cognitive elaboration, the higher the perceived image of the retailer. One important conclusion is that in order to increase consumers’ propensity to enter stores with theme-based theatrical storefront windows, retailers need to limit the cognitive elaboration required to decode their verbo-visual window design.

Keywords: consumer behavior, multimodality, store atmospherics, store window design

Procedia PDF Downloads 179
4793 Visual Outcome After 360-Degree Retinectomy in Total Rhegmatogenous Retinal Detachment with Advanced Proliferative Vitreoretinopathy: A Case Series

Authors: Andriati Nadhilah Widyarini, Ezra Margareth

Abstract:

Introduction: Rhegmatogenous retinal detachment is a condition in which there is a break in the retina, which allows the vitreous to directly enter the subretinal space. Proliferative vitreoretinopathy (PVR) may develop due to this condition and can result in a new break, which could cause traction on the previously detached retina. Various methods of therapy can be used to treat this complication. Case: This case series involved 2 eyes of 2 patients who had total retinal detachment with advanced PVR. Pars plana vitrectomy was performed, and a 360-degree retinectomy procedure with perfluorocarbon liquid usage was done. This was followed by endolaser retinopexy to surround the border of the retinectomy. 5000 cs silicone oil was used in one eye, whereas 12% perfluoropropane gas was used in the other eye as a tamponade. These procedures were performed with meticulous attention to prevent any fluid from entering the subretinal space. Postoperative examination showed attachment of the retina and improvement of the patients’ visual acuity. Both eyes’ intraocular pressure was in the normal range. One eye developed retinal displacement, but no other complications occurred. Discussion: Rhegmatogenous retinal detachment with advanced PVR is a complex situation for vitreoretinal surgeons. PVR is characterized by the growth and migration of preretinal or subretinal membranes. PVR is the most common cause of retinal reattachment failure. A 360-degree retinectomy is an alternative surgical method for overcoming this condition. The objectives of this procedure are to release the retinal traction caused by PVR, reduce the recurrence rate of PVR, and reattach the retina to the pigment epithelial surface. Conclusion: A 360-degree retinectomy provides satisfactory retinal reattachment and visual outcome improvement in rhegmatogenous retinal detachment with advanced PVR.

Keywords: RRD, retinectomy, pars plana, advanced PVR

Procedia PDF Downloads 34
4792 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena

Authors: Mohammad Zavid Parvez, Manoranjan Paul

Abstract:

A seizure prediction method is proposed by extracting global features using phase correlation between adjacent epochs for detecting relative changes and local features using fluctuation/deviation within an epoch for determining fine changes of different EEG signals. A classifier and a regularization technique are applied for the reduction of false alarms and improvement of the overall prediction accuracy. The experiments show that the proposed method outperforms the state-of-the-art methods and provides high prediction accuracy (i.e., 97.70%) with low false alarm using EEG signals in different brain locations from a benchmark data set.
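The two feature families named above can be sketched briefly: a global feature from the phase correlation between adjacent EEG epochs (the peak of the normalized cross-power spectrum, reflecting relative change between epochs) and local features from the fluctuation and deviation within one epoch. The synthetic signal, epoch length, and the specific local statistics are assumptions for illustration, not the authors' full pipeline.

```python
# Hedged sketch: phase-correlation and fluctuation/deviation features from EEG epochs.
import numpy as np

def phase_correlation_peak(epoch_a, epoch_b):
    """Peak of the phase-only correlation between two equal-length epochs."""
    fa, fb = np.fft.rfft(epoch_a), np.fft.rfft(epoch_b)
    cross = fa * np.conj(fb)
    cross /= np.maximum(np.abs(cross), 1e-12)        # keep phase only
    poc = np.fft.irfft(cross, n=len(epoch_a))
    return float(poc.max())                           # global relative-change feature

def local_features(epoch):
    """Fluctuation (mean absolute first difference) and deviation within an epoch."""
    return float(np.mean(np.abs(np.diff(epoch)))), float(np.std(epoch))

# toy usage on a synthetic single-channel EEG trace split into 4-second epochs
fs, n_epochs = 256, 10
rng = np.random.default_rng(2)
signal = (np.sin(2 * np.pi * 10 * np.arange(n_epochs * 4 * fs) / fs)
          + 0.5 * rng.standard_normal(n_epochs * 4 * fs))
epochs = signal.reshape(n_epochs, 4 * fs)

features = []
for i in range(1, n_epochs):
    poc = phase_correlation_peak(epochs[i - 1], epochs[i])
    fluct, dev = local_features(epochs[i])
    features.append([poc, fluct, dev])
print(np.array(features).shape)   # (9, 3) feature vectors ready for a classifier
```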

Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation

Procedia PDF Downloads 455
4791 Evaluating and Examining Pictures of Children of Five Years Old

Authors: Emine Yılmaz Bolat

Abstract:

Early childhood is a very important period in terms of identifying and developing early skills and abilities. It is likely that the child's development will continue in the same direction in the future. This study was conducted with 26 children for the purpose of examining the pictures of children of five years old. In the survey, children were asked to draw a picture with pastel dyes. The drawings were collected and evaluated by the researcher. At the end of the research, it was found that the children used the yellow color the most (N = 17, 16.34%) and the gray color the least (N = 1, 0.96%). When the features of the children's pictures were examined, the paintings were found to display hierarchy, transparency, completion, the use of vivid colors, and the presence of vertical and horizontal painting lines.

Keywords: early childhood, kindergarten, pictures of children, features of pictures

Procedia PDF Downloads 292
4790 Economic and Financial Crime, Forensic Accounting and Sustainable Development Goals (SDGs): A Bibliometric Analysis

Authors: Monica Violeta Achim, Sorin Nicolae Borlea

Abstract:

The aim of this work is to stress the need to enhance the role of forensic accounting in fighting economic and financial crime, in the context of the new international regulatory movements in this area promoted by the International Federation of Accountants (IFAC). Corruption, money laundering, tax evasion, and other frauds significantly hamper economic growth and human development and, ultimately, the UN Sustainable Development Goals. The present paper also stresses the role of good governance in fighting fraud, in order to achieve the most suitable sustainable development of society. In this view, we carried out a systematic bibliometric review of forensic accounting and its contribution to fraud detection and prevention, and of their relationship with good governance and the Sustainable Development Goals (SDGs). Two powerful bibliometric visualization tools, VOSviewer and CiteSpace, were used to analyze published papers identified in the Scopus and Web of Science databases over time. Our findings reveal the main red flags identified in the literature as tools used in forensic accounting, the evolution of interest in the topic over time, its distribution among world countries, and its connection with patterns of good governance. Visual designs and scientific maps are useful to show these findings in a visual way. Our findings are useful for managers and policy makers, providing important avenues that may help in reaching the 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, in the area of using forensic accounting to prevent fraud.

Keywords: forensic accounting, frauds, red flags, SDGs

Procedia PDF Downloads 113