Search results for: images processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5645

4745 Image-Based (RGB) Technique for Estimating Phosphorus Levels of Different Crops

Authors: M. M. Ali, Ahmed Al- Ani, Derek Eamus, Daniel K. Y. Tan

Abstract:

In this glasshouse study, we developed a new image-based, non-destructive technique for detecting the leaf P status of different crops such as cotton, tomato and lettuce. Plants were grown on nutrient media containing different P concentrations, i.e., 0%, 50% and 100% of the recommended P concentration (P0 = no P; P1 = 2.5 mL 10 L-1 of P and P2 = 5 mL 10 L-1 of P as NaH2PO4). After 10 weeks of growth, plants were harvested and leaf P contents were measured using the standard destructive laboratory method, while leaf images were collected with a handheld crop image sensor. We calculated leaf area, leaf perimeter and RGB (red, green and blue) values from these images. These data were then used in a linear discriminant analysis (LDA) to estimate leaf P contents, which successfully classified the plants on the basis of their leaf P contents. The data indicate that P deficiency in crop plants can be predicted using image and morphological data. Our proposed non-destructive imaging method is precise in estimating the P requirements of different crop species.
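
As an illustration of the classification step described above, a minimal scikit-learn sketch is given below; the feature values, labels and sample sizes are invented placeholders, not the study's data:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Illustrative features per plant: [leaf_area, leaf_perimeter, mean_R, mean_G, mean_B]
    X = np.array([
        [210.0,  68.0, 95, 140, 60],   # P0 (no P)
        [235.0,  72.0, 90, 150, 58],   # P0
        [310.0,  85.0, 80, 165, 55],   # P1 (50% of recommended P)
        [330.0,  88.0, 78, 170, 54],   # P1
        [405.0,  99.0, 70, 185, 50],   # P2 (100% of recommended P)
        [420.0, 102.0, 68, 190, 49],   # P2
    ])
    y = np.array([0, 0, 1, 1, 2, 2])   # P-level class labels

    lda = LinearDiscriminantAnalysis().fit(X, y)
    new_leaf = np.array([[320.0, 86.0, 79, 168, 55]])   # unseen plant
    print("Predicted P class:", lda.predict(new_leaf)[0])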

Keywords: image-based techniques, leaf area, leaf P contents, linear discriminant analysis

Procedia PDF Downloads 382
4744 Influence of the Refractory Period on Neural Networks Based on the Recognition of Neural Signatures

Authors: José Luis Carrillo-Medina, Roberto Latorre

Abstract:

Experimental evidence has revealed that different living neural systems can sign their output signals with some specific neural signature. Although experimental and modeling results suggest that neural signatures can have an important role in the activity of neural networks in order to identify the source of the information or to contextualize a message, the functional meaning of these neural fingerprints is still unclear. The existence of cellular mechanisms to identify the origin of individual neural signals can be a powerful information processing strategy for the nervous system. We have recently built different models to study the ability of a neural network to process information based on the emission and recognition of specific neural fingerprints. In this paper, we further analyze the features that can influence the information processing ability of networks of this kind. In particular, we focus on the role that the duration of the refractory period of each neuron after emitting a signed message can play in the collective dynamics of the network.

Keywords: neural signature, neural fingerprint, processing based on signal identification, self-organizing neural network

Procedia PDF Downloads 492
4743 Assessment of Seeding and Weeding Field Robot Performance

Authors: Victor Bloch, Eerikki Kaila, Reetta Palva

Abstract:

Field robots are an important tool for enhancing efficiency and decreasing the climatic impact of food production. A number of commercial field robots exist; however, since this technology is still new, the advantages and limitations of the robots, as well as methods for their optimal use, are still unclear. In this study, the performance of a commercial field robot for seeding and weeding was assessed. A research 2-ha sugar beet field with 0.5 m row width was used for testing, which included robotic sowing of sugar beet and weeding five times during the first two months of growth. About three and five percent of the field were used as untreated and chemically weeded control areas, respectively. The plant detection was based on the exact plant location without image processing. The robot was equipped with six seeding and weeding tools, including passive between-row harrow hoes and active hoes cutting inside the rows between the plants, and it moved with a maximal speed of 0.9 km/h. The robot's performance was assessed by image processing. The field images were collected by an action camera with a resolution of 27 Mpixels installed on the robot at a height of 2 m and by a drone with a 16 Mpixel camera flying at 4 m height. To detect plants and weeds, the YOLO model was trained with transfer learning from two available datasets. A preliminary analysis of the entire field showed that in the areas treated by the robot, the average weed density varied across the field from 6.8 to 9.1 weeds/m² (compared with 0.8 in the chemically treated area and 24.3 in the untreated area), the average weed density inside the rows was 2.0-2.9 weeds/m (compared with 0 in the chemically treated area), and the emergence rate was 90-95%. Information about the robot's performance is highly important for the application of robotics to field tasks. With the help of the developed method, the performance can be assessed several times during growth according to the robotic weeding frequency. Farmers using it can know the field condition and the efficiency of the robotic treatment over the whole field. Farmers and researchers could develop optimal strategies for using the robot, such as seeding and weeding timing, robot settings, and plant and field parameters and geometry. The robot producers can obtain quantitative information from an actual working environment and improve the robots accordingly.
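
A sketch of how weed density figures of the kind reported above could be derived from detections, assuming the ultralytics YOLO package, a hypothetical fine-tuned checkpoint and class index, and a known ground footprint per image:

    from ultralytics import YOLO

    # Hypothetical fine-tuned weights and class indexing; the authors' model differs.
    model = YOLO("weeds_sugarbeet.pt")   # assumed custom checkpoint
    IMAGE_AREA_M2 = 1.5                  # assumed ground footprint of one image

    results = model("field_tile.jpg")    # run detection on one field image
    n_weeds = sum(1 for box in results[0].boxes if int(box.cls) == 1)  # class 1 = weed (assumed)
    print(f"Weed density: {n_weeds / IMAGE_AREA_M2:.1f} weeds/m^2")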

Keywords: agricultural robot, field robot, plant detection, robot performance

Procedia PDF Downloads 87
4742 Automatic Processing of Trauma-Related Visual Stimuli in Female Patients Suffering From Post-Traumatic Stress Disorder after Interpersonal Traumatization

Authors: Theresa Slump, Paula Neumeister, Katharina Feldker, Carina Y. Heitmann, Thomas Straube

Abstract:

A characteristic feature of post-traumatic stress disorder (PTSD) is the automatic processing of disorder-specific stimuli, which expresses itself in intrusive symptoms such as intense physical and psychological reactions to trauma-associated stimuli. This automatic processing plays an essential role in the development and maintenance of symptoms. The aim of our study was, therefore, to investigate the behavioral and neural correlates of automatic processing of trauma-related stimuli in PTSD. Although interpersonal traumatization occurs frequently, it has not yet been sufficiently studied; our study therefore focused on patients suffering from interpersonal traumatization. While previous imaging studies on PTSD mainly used faces, words, or generally negative visual stimuli, our study presented complex trauma-related and neutral visual scenes. We examined 19 female patients suffering from PTSD and 19 healthy women as a control group. All subjects performed a geometric comparison task while lying in a functional magnetic resonance imaging (fMRI) scanner. Trauma-related scenes and neutral visual scenes that were not relevant to the task were presented while the subjects were doing the task. On the behavioral level, there were no significant differences between the task performance of the two groups. On the neural level, the PTSD patients showed significant hyperactivation of the hippocampus for task-irrelevant trauma-related stimuli versus neutral stimuli when compared with healthy control subjects. Connectivity analyses also revealed altered connectivity between the hippocampus and other anxiety-related areas in PTSD patients. Overall, these findings suggest that fear-related areas are involved in PTSD patients' processing of trauma-related stimuli even when the stimuli are task-irrelevant.

Keywords: post-traumatic stress disorder, automatic processing, hippocampus, functional magnetic resonance imaging

Procedia PDF Downloads 199
4741 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that throughout human history, derogatory and sexist adjectives are used significantly more frequently when describing females than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society comes hand in hand not only with its language but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
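
One common way to expose such associations, sketched here with gensim under the assumption that word vectors trained on the corpus in question are available on disk (the file name and adjective list are illustrative):

    from gensim.models import KeyedVectors

    # Assumed pretrained vectors; any word2vec-format file would do.
    kv = KeyedVectors.load_word2vec_format("corpus_vectors.bin", binary=True)

    adjectives = ["hysterical", "brilliant", "bossy", "decisive"]
    for adj in adjectives:
        # Positive values mean the adjective sits closer to 'she' than to 'he'.
        bias = kv.similarity(adj, "she") - kv.similarity(adj, "he")
        print(f"{adj:>10}: gender association {bias:+.3f}")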

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 120
4740 Exploring the Spatial Relationship between Built Environment and Ride-hailing Demand: Applying Street-Level Images

Authors: Jingjue Bao, Ye Li, Yujie Qi

Abstract:

The explosive growth of ride-hailing has reshaped residents' travel behavior and plays a crucial role in urban mobility within the built environment. Contributing to research on the spatial variation of ride-hailing demand and its relationship to the built environment and socioeconomic factors, this study utilizes multi-source data from Haikou, China, to construct a Multi-scale Geographically Weighted Regression (MGWR) model that accounts for spatial scale heterogeneity. The regression results showed that the MGWR model demonstrated superior interpretability and reliability, with a 3.4% improvement in R² and a reduction in AIC from 4853 to 4787, compared with the Geographically Weighted Regression (GWR) model. Furthermore, to precisely identify the surrounding environment of each sampling point, the DeepLabv3+ model was employed to segment street-level images. Features extracted from these images were incorporated as variables in the regression model, further enhancing its rationality and accuracy, with a 7.78% improvement in R² compared with the MGWR model that considered only region-level variables. By integrating multi-scale geospatial data and utilizing advanced computer vision techniques, this study provides a comprehensive understanding of the spatial dynamics between ride-hailing demand and the urban built environment. The insights gained from this research are expected to contribute significantly to urban transportation planning and policy making, as well as to ride-hailing platforms, facilitating the development of more efficient and effective mobility solutions in modern cities.
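
A sketch of the street-image feature-extraction step; torchvision ships DeepLabv3 (not the '+' variant used in the paper), so this is a stand-in showing how per-class pixel fractions can become regression variables:

    import torch
    from torchvision.io import read_image
    from torchvision.models.segmentation import deeplabv3_resnet50, DeepLabV3_ResNet50_Weights

    weights = DeepLabV3_ResNet50_Weights.DEFAULT
    model = deeplabv3_resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    img = read_image("street_view.jpg")              # hypothetical street-level image
    with torch.no_grad():
        out = model(preprocess(img).unsqueeze(0))["out"]   # [1, classes, H, W]
    labels = out.argmax(dim=1)

    # The pixel fraction per class becomes one variable per sampling point.
    fractions = torch.bincount(labels.flatten(), minlength=out.shape[1]).float()
    fractions /= fractions.sum()
    print({weights.meta["categories"][i]: round(float(f), 3)
           for i, f in enumerate(fractions) if f > 0.01})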

Keywords: travel behavior, ride-hailing, spatial relationship, built environment, street-level image

Procedia PDF Downloads 81
4739 Understanding Children’s Visual Attention to Personal Protective Equipment Using Eye-Tracking

Authors: Vanessa Cho, Janet Hsiao, Nigel King, Robert Anthonappa

Abstract:

Background: The personal protective equipment (PPE) requirements for health care workers (HCWs) have changed significantly during the COVID-19 pandemic. Aim: To ascertain, using eye-tracking technology, what children notice the most when seeing HCWs in various PPE. Design: A Tobii Nano Pro eye-tracking camera tracked 156 children's visual attention while they viewed photographs of HCWs in various PPE. Eye movement analysis with Hidden Markov Models (EMHMM) was employed to analyse 624 recordings using two approaches, namely (i) data-driven, where children's fixations determined the regions of interest (ROIs), and (ii) fixed ROIs, where the investigators predefined the ROIs. Results: Two significant eye movement patterns, namely distributed (85.2%) and selective (14.7%), were identified (P<0.05). Most children fixated primarily on the face regardless of the different PPE. Children fixated equally on all PPE images in the distributed pattern, while a strong preference for unmasked faces was evident in the selective pattern (P<0.01). Conclusion: Children as young as 2.5 years used a top-down visual search behaviour and demonstrated face processing ability. Most children did not show a strong visual preference for a specific PPE, while a minority preferred PPE with distinct facial features, namely without masks and loupes.

Keywords: COVID-19, PPE, dentistry, pediatric

Procedia PDF Downloads 90
4738 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysis of the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come with a cost. Little research has been undertaken on how to specify optimally what data to capture, transmit, process, and store at various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the required attributes and classification to take manufacturing digital data from various sources to determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision making on which dataset should be processed at the ‘edge’ and what to send to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
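
A minimal sketch of a fast-and-frugal routing heuristic of the kind described; the cues and thresholds below are illustrative, not the paper's classification attributes:

    from dataclasses import dataclass

    @dataclass
    class Sample:
        size_mb: float          # payload size
        latency_critical: bool  # needed for real-time decisions?
        sample_rate_hz: float   # acquisition rate

    def route(s: Sample) -> str:
        """Fast-and-frugal tree: one cue per level, first exit wins."""
        if s.latency_critical:
            return "edge"    # real-time control loops stay local
        if s.size_mb > 50:
            return "edge"    # too costly to transmit raw
        if s.sample_rate_hz > 1000:
            return "edge"    # high-rate streams are pre-processed locally
        return "cloud"       # archival / batch analytics

    print(route(Sample(size_mb=120, latency_critical=False, sample_rate_hz=10)))  # edge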

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 182
4737 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis

Authors: Mehrnaz Mostafavi

Abstract:

The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
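
A toy stand-in for the sentence-level classification idea, using TF-IDF features and logistic regression rather than the authors' actual SQL/NLP pipeline; the report sentences and labels are invented:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled report sentences (1 = mentions a pulmonary nodule, 0 = does not).
    sentences = [
        "A 6 mm nodule is seen in the right upper lobe.",
        "No focal consolidation or pleural effusion.",
        "Stable 4 mm left lower lobe pulmonary nodule.",
        "The heart size is within normal limits.",
    ]
    labels = [1, 0, 1, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(sentences, labels)
    print(clf.predict(["New 8 mm nodule in the left upper lobe."]))  # expect [1]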

Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans

Procedia PDF Downloads 101
4736 Optimization of Laser Doping Selective Emitter for Silicon Solar Cells

Authors: Meziani Samir, Moussi Abderrahmane, Chaouchi Sofiane, Guendouzi Awatif, Djema Oussama

Abstract:

Laser doping has a large potential for integration into silicon solar cell technologies. The ability to process local, heavily diffused regions in a self-aligned manner can greatly simplify processing sequences for the fabrication of selective emitters. The choice of laser parameters for a laser doping process at 532 nm is investigated. Solid-state lasers with different powers and speeds were used for laser doping. In this work, the aim is the formation of selective emitter solar cells with a reduced number of technological steps. In order to obtain a highly doped, localized emitter region, we used 532 nm laser doping. Note that this region will receive the metallization of the Ag grid by screen printing. For this, we used SOLIDWORKS software to design a single type of pattern for square silicon cells. Sheet resistances, phosphorus doping concentrations and silicon bulk lifetimes of the irradiated samples are presented. Additionally, secondary ion mass spectroscopy (SIMS) profiles of the laser-processed samples were acquired. Scanning electron microscope and optical microscope images of laser-processed surfaces at different parameters are shown and compared.

Keywords: laser doping, selective emitter, silicon, solar cells

Procedia PDF Downloads 102
4735 Foggy Image Restoration Using Neural Network

Authors: Khader S. Al-Aidmat, Venus W. Samawi

Abstract:

Blurred vision in a misty atmosphere is an essential problem that needs to be resolved. To solve this problem, we developed a technique to restore the original scene from its foggy, degraded version using a back-propagation neural network (BP-NN). The suggested technique is based on a mapping between the foggy scene and its corresponding original scene. Seven different approaches are suggested based on the type of features used in image restoration. Features are extracted from the spatial and spatial-frequency domains (using the DCT). Each of these approaches comes with its own BP-NN architecture depending on the type and number of features used. The weight matrix resulting from training each BP-NN represents a fog filter. The performance of these filters is evaluated empirically (using PSNR) and perceptually. By comparing the performance of these filters, the effective features that suit the BP-NN technique for restoring foggy images are recognized. This system proved its effectiveness and success in restoring moderately foggy images.
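
A compressed sketch of the DCT-feature variant of the approach, assuming a crude synthetic fog model and an MLP in place of the paper's exact BP-NN architecture:

    import numpy as np
    from scipy.fft import dctn, idctn
    from sklearn.neural_network import MLPRegressor

    # Learn a mapping from DCT coefficients of foggy 8x8 patches to DCT
    # coefficients of clear patches; random arrays stand in for real images.
    rng = np.random.default_rng(0)
    clear = rng.random((500, 8, 8))
    foggy = 0.6 * clear + 0.4          # crude synthetic fog model (assumption)

    X = np.array([dctn(p, norm="ortho").ravel() for p in foggy])
    Y = np.array([dctn(p, norm="ortho").ravel() for p in clear])

    net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(X, Y)

    test = 0.6 * rng.random((8, 8)) + 0.4
    coeffs = net.predict(dctn(test, norm="ortho").ravel()[None, :])[0]
    restored = idctn(coeffs.reshape(8, 8), norm="ortho")   # the "fog filter" output
    print("restored patch range:", restored.min(), restored.max())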

Keywords: artificial neural network, discrete cosine transform, feed forward neural network, foggy image restoration

Procedia PDF Downloads 382
4734 Identification of How Pre-Service Physics Teachers Understand Image Formations through Virtual Objects in the Field of Geometric Optics and Development of a New Material to Exploit Virtual Objects

Authors: Ersin Bozkurt

Abstract:

The aim of the study is to develop materials for understanding image formation through virtual objects in geometric optics. The images in physics course books are formed using real objects. This results in mistakes regarding the features of images because of generalizations, which lead to conceptual misunderstandings in learning. In this study, it was intended to identify pre-service physics teachers' misunderstandings arising from false generalizations. A focus group interview was used as a qualitative method. The findings of the study show that students have several misconceptions, such as "the image in a plane mirror is always virtual". However, a real image can be formed in a plane mirror. To explain a virtual object's image formation in a more understandable way, the design of an overhead projector and episcope was illustrated. The illustrations are original, and several computer simulations will be suggested.
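
A minimal numeric illustration of the point about plane mirrors, assuming the common "real is positive" sign convention (a virtual object arises when a converging beam is aimed at a point behind the mirror):

    # Plane mirror: 1/d_o + 1/d_i = 2/R with R -> infinity, so d_i = -d_o.
    def plane_mirror_image(d_object: float) -> float:
        """Image distance; negative d_object denotes a virtual object."""
        return -d_object

    real_object = 30.0       # cm in front of the mirror
    virtual_object = -20.0   # cm "behind" the mirror (converging incident beam)

    print(plane_mirror_image(real_object))     # -30.0 -> image behind mirror: virtual
    print(plane_mirror_image(virtual_object))  #  20.0 -> image in front: real!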

Keywords: computer simulations, geometric optics, physics education, students' misconceptions in physics

Procedia PDF Downloads 404
4733 Programmable Microfluidic Device Based on Stimuli Responsive Hydrogels

Authors: Martin Elstner

Abstract:

Processing of information by means of handling chemicals is a ubiquitous phenomenon in nature. Technical implementations of chemical information processing suffer from low integration densities compared to electronic devices. Stimuli-responsive hydrogels are promising candidates for materials with information processing capabilities. These hydrogels are sensitive toward chemical stimuli like metal ions or amino acids. The binding of an analyte molecule induces conformational changes inside the polymer network, and subsequently the water content and volume of the hydrogel vary. This volume change can control material flows, and concurrently information flows, in microfluidic devices. The combination of this technology with powerful chemical logic gates yields a platform for highly integrated chemical circuits. The manufacturing process of such devices is very challenging, and rapid prototyping is a key technology used in the study. 3D printing allows generating three-dimensional defined structures of high complexity in a single and fast process step. This thermoplastic master is molded into PDMS, and the master is removed by dissolution in an organic solvent. A variety of hydrogel materials is prepared by dispenser printing of pre-polymer solutions. By a variation of functional groups or cross-linking units, the functionality of the whole circuit can be programmed. Finally, applications in the field of bio-molecular analytics were demonstrated with an autonomously operating microfluidic chip.

Keywords: bioanalytics, hydrogels, information processing, microvalve

Procedia PDF Downloads 309
4732 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised. This often entails collecting a sizable dataset and cleaning it to remove or correct any inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired. This often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, reducing dimensionality, etc. This pre-processed data is then used to train a model on some machine learning algorithm. After the model has been trained, it needs to be assessed by determining metrics like accuracy, precision, and recall, utilising a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. Thus, various machine learning algorithms can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example, for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of features selected, a different algorithm can be used. As a result, in order to get the optimum outcomes, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large "combination set of pre-processing steps and algorithms" into an automated workflow which simplifies the task of carrying out all possibilities.
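
A minimal sketch of the idea using scikit-learn, where a parameter grid over pipeline steps plays the role of the operator pool and GridSearchCV plays the role of the scheduler (the dataset and candidate operators are illustrative):

    from sklearn.datasets import load_breast_cancer
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler, StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    pipe = Pipeline([("impute", SimpleImputer()),
                     ("scale", StandardScaler()),
                     ("model", LogisticRegression(max_iter=5000))])

    # Operator pool: every imputation x scaling x algorithm combination is scheduled.
    grid = {"impute__strategy": ["mean", "median"],
            "scale": [StandardScaler(), MinMaxScaler()],
            "model": [LogisticRegression(max_iter=5000), SVC()]}

    search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
    print(search.best_params_, round(search.best_score_, 4))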

Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 58
4731 Hand Symbol Recognition Using Canny Edge Algorithm and Convolutional Neural Network

Authors: Harshit Mittal, Neeraj Garg

Abstract:

Hand symbol recognition is a pivotal component in the domain of computer vision, with far-reaching applications spanning sign language interpretation, human-computer interaction, and accessibility. This research paper discusses an approach integrating the Canny Edge algorithm and a convolutional neural network. The significance of this study lies in its potential to enhance communication and accessibility for individuals with hearing impairments or those engaged in gesture-based interactions with technology. In the experiment, the data was manually collected by the authors from a webcam using Python code; to increase the dataset, augmentation was applied to the original images, which makes the model more compatible and advanced. Further, the dataset of about 6000 coloured images, distributed equally in 5 classes (i.e., 1, 2, 3, 4, 5), was pre-processed first to gray images and then by the Canny Edge algorithm with thresholds 1 and 2 set to 150 each. After successful data building, this data was trained on a convolutional neural network model, giving accuracy: 0.97834, precision: 0.97841, recall: 0.9783, and F1 score: 0.97832. For user purposes, a block of code was built in Python to enable a window for hand symbol recognition. This research, at its core, seeks to advance the field of computer vision by providing an advanced perspective on hand sign recognition. By leveraging the capabilities of the Canny Edge algorithm and convolutional neural networks, this study contributes to the ongoing efforts to create more accurate, efficient, and accessible solutions for individuals with diverse communication needs.
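
The described pre-processing step, sketched with OpenCV (the file names are illustrative):

    import cv2

    # Colour -> grayscale -> Canny with both thresholds at 150, as in the abstract.
    img = cv2.imread("hand_symbol.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, threshold1=150, threshold2=150)

    cv2.imwrite("hand_symbol_edges.png", edges)   # edge maps are then fed to the CNN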

Keywords: hand symbol recognition, computer vision, Canny edge algorithm, convolutional neural network

Procedia PDF Downloads 65
4730 Coordination Polymer Hydrogels Based on Coinage Metals and Nucleobase Derivatives

Authors: Lamia L. G. Al-Mahamad, Benjamin R. Horrocks, Andrew Houlton

Abstract:

Hydrogels based on metal coordination polymers of nucleosides and a range of metal ions (Au, Ag, Cu) have been prepared and characterized by atomic force microscopy (AFM), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy, ultraviolet-visible absorption spectroscopy, and powder X-ray diffraction. AFM images of the xerogels revealed the formation of extremely long polymer molecules (> 10 micrometers, the maximum scan range). This result is also consistent with TEM images, which show a fibrous morphology. Oxidative doping of the Au-nucleoside fibres produces an electrically conductive nanowire. No sharp Bragg peaks were found in the X-ray diffraction patterns of the metal-ion hydrogels, indicating that the samples were amorphous; instead, the data showed broad peaks in the range 20 < Q < 40, corresponding to distances d = 2π/Q. The data were analysed using a simplified Rietveld method by fitting a regression model to obtain the distances between atoms.

Keywords: hydrogel, metal ions, nanowire, nucleoside

Procedia PDF Downloads 265
4729 Computer-Aided Detection of Liver and Spleen from CT Scans using Watershed Algorithm

Authors: Belgherbi Aicha, Bessaid Abdelhafid

Abstract:

In recent years, a great deal of research work has been devoted to the development of semi-automatic and automatic techniques for the analysis of abdominal CT images. The first and fundamental step in all these studies is the semi-automatic liver and spleen segmentation, which is still an open problem. In this paper, a semi-automatic liver and spleen segmentation method based on mathematical morphology and the watershed algorithm is proposed. Our algorithm proceeds in two parts. In the first, we seek to determine the region of interest by applying morphological filters to extract the liver and spleen. The second step consists of improving the quality of the image gradient; in this step, we propose a method for improving the image gradient to reduce the over-segmentation problem by applying spatial filters followed by morphological filters. Thereafter, we proceed to the segmentation of the liver and spleen. The aim of this work is to develop a method for semi-automatic segmentation of the liver and spleen based on the watershed algorithm, to improve the accuracy and robustness of the segmentation, and to evaluate the new semi-automatic approach against manual liver segmentation. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with manual segmentation performed by an expert. The experimental results are described in the last part of this work. The system has been evaluated by computing the sensitivity and specificity between the semi-automatically segmented (liver and spleen) contours and the contours manually traced by radiological experts. Liver segmentation achieved a sensitivity of 96% and a specificity of 99%. Spleen segmentation achieved similar, promising results: a sensitivity of 95% and a specificity of 99%.
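
A generic sketch of the gradient-plus-watershed pipeline with scikit-image; the demo image, filter sizes and marker threshold are illustrative, not the paper's settings:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import data, filters, morphology, segmentation

    image = data.coins()                                   # stand-in for a CT slice

    denoised = filters.median(image, morphology.disk(3))   # spatial filter
    gradient = filters.sobel(denoised)                     # image gradient
    # Morphological smoothing of the gradient reduces over-segmentation.
    gradient = morphology.closing(gradient, morphology.disk(2))

    markers = ndi.label(gradient < 0.05)[0]                # low-gradient seed regions
    labels = segmentation.watershed(gradient, markers)
    print("Number of regions:", labels.max())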

Keywords: CT images, liver and spleen segmentation, anisotropic diffusion filter, morphological filters, watershed algorithm

Procedia PDF Downloads 325
4728 Secret Sharing in Visual Cryptography Using NVSS and Data Hiding Techniques

Authors: Misha Alexander, S. B. Waykar

Abstract:

Visual cryptography is a special unbreakable encryption technique that transforms the secret image into random, noisy pixel shares. These shares are transmitted over the network, and because of their noisy texture they attract hackers. To address this issue, a Natural Visual Secret Sharing (NVSS) scheme was introduced that uses natural shares, either in digital or printed form, to generate the noisy secret share. This scheme greatly reduces the transmission risk but causes distortion in the retrieved secret image through variations in the settings and properties of the digital devices used to capture the natural image during the encryption/decryption phase. This paper proposes a new NVSS scheme that extracts the secret key from randomly selected, unaltered multiple natural images. To further improve the security of the shares, data hiding techniques such as steganography and alpha channel watermarking are proposed.
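
A minimal sketch of the XOR idea underlying natural-share schemes; random arrays stand in for real natural images, and real NVSS feature extraction is considerably more involved:

    import numpy as np

    rng = np.random.default_rng(1)
    secret = rng.integers(0, 256, (64, 64), dtype=np.uint8)            # secret image
    naturals = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(3)]

    key = naturals[0] ^ naturals[1] ^ naturals[2]   # key derived from natural shares
    noisy_share = secret ^ key                      # transmitted noise-like share

    recovered = noisy_share ^ key                   # only the natural images unlock it
    assert np.array_equal(recovered, secret)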

Keywords: decryption, encryption, natural visual secret sharing, natural images, noisy share, pixel swapping

Procedia PDF Downloads 404
4727 Deep Learning Strategies for Mapping Complex Vegetation Patterns in Mediterranean Environments Undergoing Climate Change

Authors: Matan Cohen, Maxim Shoshany

Abstract:

Climatic, topographic and geological diversity, together with frequent disturbance and recovery cycles, produce highly complex spatial patterns of trees, shrubs, dwarf shrubs and bare ground patches. Assessment of the spatial and temporal variations of these life-form patterns under climate change is of high ecological priority. Here we report on one of the first attempts to discriminate between images of three Mediterranean life-form patterns at three densities. The development of an extensive database of orthophoto images representing these 9 pattern categories was instrumental for training and testing pre-trained and newly trained DL models utilizing the DenseNet architecture. Both models demonstrated the advantages of Deep Learning approaches over existing spectral and spatial (pattern or texture) algorithmic methods in differentiating the 9 life-form spatial mixture categories.
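
A transfer-learning sketch with torchvision's DenseNet, re-headed for the 9 categories named above; everything beyond the class count is a generic fine-tuning setup, not the authors' exact configuration:

    import torch.nn as nn
    from torchvision.models import densenet121, DenseNet121_Weights

    model = densenet121(weights=DenseNet121_Weights.DEFAULT)   # pre-trained backbone
    model.classifier = nn.Linear(model.classifier.in_features, 9)  # 9 pattern classes

    for p in model.features.parameters():   # optionally freeze the backbone
        p.requires_grad = False
    # ...then train model.classifier on the orthophoto patches as usual.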

Keywords: texture classification, deep learning, desert fringe ecosystems, climate change

Procedia PDF Downloads 88
4726 Estimating PM2.5 Concentrations Based on Landsat 8 Imagery and Historical Field Data over the Metropolitan Area of Mexico City

Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Francisco Andree Ramirez-Casas, Alondra Orozco-Gomez, Miguel Angel Sanchez-Caro, Carlos Herrera-Ventosa

Abstract:

High concentrations of particulate matter in the atmosphere pose a threat to human health, especially over areas with high concentrations of population; however, field air pollution monitoring is expensive and time-consuming. In order to achieve reduced costs and coverage of the whole urban area, remote sensing can be used. In this study, PM2.5 concentrations over Mexico City's metropolitan area are estimated using atmospheric reflectance from LANDSAT 8 satellite imagery and historical PM2.5 measurements of the Automatic Environmental Monitoring Network of Mexico City (RAMA). Through the processing of the available satellite images, a preliminary model was generated to evaluate the optimal bands for the generation of the final model for Mexico City. Work on the final model continues with the results of the preliminary model. It was found that infrared bands have helped to model in other cities, but the effectiveness that these bands could provide for the geographic and climatic conditions of Mexico City is still being evaluated.
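
A sketch of the band-screening regression step, with invented reflectance values and PM2.5 readings standing in for the Landsat samples and RAMA data:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Rows: station pixels; columns: e.g. blue, green, red, NIR reflectance (toy numbers).
    reflectance = np.array([[0.08, 0.10, 0.12, 0.25],
                            [0.11, 0.13, 0.15, 0.22],
                            [0.06, 0.08, 0.10, 0.28],
                            [0.14, 0.16, 0.18, 0.20]])
    pm25 = np.array([38.0, 52.0, 27.0, 61.0])   # matched ground readings

    model = LinearRegression().fit(reflectance, pm25)
    print("R^2:", model.score(reflectance, pm25))
    print("Band coefficients:", model.coef_)    # screen candidate bands by weight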

Keywords: air pollution modeling, Landsat 8, PM2.5, remote sensing

Procedia PDF Downloads 196
4725 The Output Fallacy: An Investigation into Input, Noticing, and Learners’ Mechanisms

Authors: Samantha Rix

Abstract:

The purpose of this research paper is to investigate the cognitive processing of learners who receive input but produce very little or no output, and who, when they do produce output, exhibit a similar language proficiency to those learners who produce output more regularly in the language classroom. Previous studies have investigated the benefits of output (with somewhat differing results); therefore, the presentation will begin with an investigation of what may underlie gains in proficiency without output. Consequently, a pilot study was designed and conducted to gain insight into the cognitive processing of low-output language learners, looking, for example, at the quantity and quality of noticing. This was carried out within the paradigm of classroom action research, observing and interviewing low-output language learners in an intensive English program at a small Midwest university. The results of the pilot study indicated that autonomy in language learning, specifically utilizing strategies such as self-monitoring, self-talk, and thinking 'out loud', was crucial in the development of language proficiency for academic-level performance. The presentation concludes with an examination of pedagogical implications for classroom use in order to aid students in their language development.

Keywords: cognitive processing, language learners, language proficiency, learning strategies

Procedia PDF Downloads 475
4724 Impact of Radiation Usage on Anti-Nutritional Compounds (Antitrypsin and Phytic Acid) of Livestock and Poultry Foods

Authors: Mohammad Khosravi, Ali Kiani, Behroz Dastar, Parvin Showrang

Abstract:

A review was carried out on important anti-nutritional compounds of livestock and poultry foods and the effect of radiation usage. Nowadays, with advancements in technology, different methods have been considered for the optimal usage of nutrients in livestock and poultry foods. Steaming, extruding, pelleting, and the use of chemicals are the most common and popular methods in food processing. The use of radiation in food processing research in the livestock and poultry industry is currently highly regarded. Ionizing (electron, gamma) and non-ionizing beams (microwave and infrared) are the most usable rays in animal food processing. In recent research, these beams have been used to remove or reduce anti-nutritional factors and microbial contamination and to improve the digestibility of nutrients in poultry and livestock food. The evidence presented will help researchers to recognize techniques of relevance to them. Simplification of some of these techniques, especially in developing countries, must be addressed so that they can be used more widely.

Keywords: antitrypsin, gamma, anti-nutritional components, phytic acid, radiation

Procedia PDF Downloads 343
4723 Extraction of Urban Building Damage Using Spectral, Height and Corner Information

Authors: X. Wang

Abstract:

Timely and accurate information on urban building damage caused by an earthquake is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery, containing abundant fine-scale information, offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method using bi-temporal spectral, height and corner information was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and errors in extracting height information, ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in the density of corners between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. The initial building damage result was generated by combining the above two results, and a post-processing procedure was finally adopted to refine it. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image and post-event LiDAR data. The results showed that the method proposed in this study significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable way to detect urban building collapse, which is also applicable to relevant applications.
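
A sketch of the two damage cues with numpy and OpenCV; the file names, the 3 m height threshold, the block size and the corner-count threshold are all illustrative:

    import cv2
    import numpy as np

    # Cue 1: height loss between pre-event (stereo) and post-event (LiDAR) heights.
    pre_dsm = np.load("pre_event_height.npy")
    post_dsm = np.load("post_event_height.npy")
    height_damage = (pre_dsm - post_dsm) > 3.0      # >3 m loss flags possible collapse

    # Cue 2: drop in corner density where geometric structure collapsed.
    pre_pan = cv2.imread("pre_pan.png", cv2.IMREAD_GRAYSCALE)
    post_pan = cv2.imread("post_pan.png", cv2.IMREAD_GRAYSCALE)

    def corner_density(img, block=32):
        corners = cv2.goodFeaturesToTrack(img, maxCorners=100000,
                                          qualityLevel=0.01, minDistance=3)
        density = np.zeros((img.shape[0] // block, img.shape[1] // block))
        for x, y in corners.reshape(-1, 2):
            r = min(int(y) // block, density.shape[0] - 1)
            c = min(int(x) // block, density.shape[1] - 1)
            density[r, c] += 1
        return density

    corner_damage = (corner_density(pre_pan) - corner_density(post_pan)) > 5

    # The initial damage map combines both cues after resampling to a common grid.
    print("Height-damage pixels:", int(height_damage.sum()))
    print("Corner-damage blocks:", int(corner_damage.sum()))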

Keywords: building damage, corner, earthquake, height, very high resolution (VHR)

Procedia PDF Downloads 213
4722 Analysis of Enhanced Built-up and Bare Land Index in the Urban Area of Yangon, Myanmar

Authors: Su Nandar Tin, Wutjanun Muttitanon

Abstract:

The availability of free global and historical satellite imagery provides a valuable opportunity for mapping and monitoring the built-up area year by year, constantly and effectively. Land distribution guidelines and the identification of changes are important in preparing and reviewing changes in the ground overview data. This study utilizes Landsat images spanning thirty years to acquire significant land-cover data that are extremely valuable for urban planning. This paper mainly focuses on the basics of extracting the built-up area of the city development area from satellite images of LANDSAT 5, 7 and 8 and Sentinel-2A from USGS at five-year intervals. The purpose is to analyse the changes in the urban built-up area year by year and to assess the accuracy of mapping built-up and bare land areas in studying the trend of urban built-up changes over the period from 1990 to 2020. GIS tools such as the raster calculator and built-up area modelling are used in this study, and the indices, which include the enhanced built-up and bareness index (EBBI), the normalized difference built-up index (NDBI), the urban index (UI), the built-up index (BUI) and the normalized difference bareness index (NDBAI), are calculated to obtain a high-accuracy urban built-up area. This study therefore points out a viable approach to automatically mapping typical enhanced built-up and bare land changes (EBBI) with simple indices according to the outputs of the indexes. The EBBI output of Sentinel-2A achieved 48.4% accuracy, compared with 15.6% for the other indices from the Landsat images of 1990, while the urban built-up area increased from 43.6% in 1990 to 92.5% in 2020 over the study area during the last thirty years.
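
A sketch of the index computation on per-band arrays, using the commonly published EBBI and NDBI formulas (the band values and the built-up threshold are synthetic placeholders):

    import numpy as np

    # EBBI as commonly defined: (SWIR - NIR) / (10 * sqrt(SWIR + TIR));
    # NDBI = (SWIR - NIR) / (SWIR + NIR).
    rng = np.random.default_rng(0)
    nir = rng.uniform(0.1, 0.5, (100, 100))    # stand-ins for calibrated band arrays
    swir = rng.uniform(0.1, 0.6, (100, 100))
    tir = rng.uniform(0.2, 0.4, (100, 100))

    ndbi = (swir - nir) / (swir + nir)
    ebbi = (swir - nir) / (10.0 * np.sqrt(swir + tir))

    built_up = ebbi > 0.1    # threshold chosen per scene in practice
    print("Built-up fraction:", built_up.mean())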

Keywords: built-up area, EBBI, NDBI, NDBAI, urban index

Procedia PDF Downloads 173
4721 Analyzing the Risk Based Approach in General Data Protection Regulation: Basic Challenges Connected with Adapting the Regulation

Authors: Natalia Kalinowska

Abstract:

The adoption of the General Data Protection Regulation (GDPR) finished the four-year work of the European Commission in this area in the European Union. Considering the far-reaching changes which will be applied by the GDPR, the European legislator envisaged a two-year transitional period. Member states and companies have to prepare for the new regulation by 25 May 2018. The idea that represents a new look at the attitude to data protection in the European Union is the risk-based approach. So far, as a result of the implementation of Directive 95/46/EC, many European countries (including Poland) have adopted very particular regulations specifying technical and organisational security measures; e.g., the Polish implementing rules indicate even how long a password should be. According to the new approach, from May 2018, controllers and processors will be obliged to apply security measures adequate to the level of risk associated with specific data processing. The risk in the GDPR should be interpreted as the likelihood of a breach of the rights and freedoms of the data subject. According to Recital 76, the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context and purposes of the processing. The GDPR does not indicate which security measures should be applied; the recitals give only examples, such as anonymization or encryption. It depends on the controller's decision what type of security measures the controller considers sufficient, and he will be responsible if these measures are not sufficient or if his identification of the risk level is incorrect. The data protection regulation indicates a few levels of risk. Recital 76 indicates risk and high risk, but some lawyers think that there is one more category: low risk/no risk. Low-risk/no-risk data processing is a situation where the processing is unlikely to result in a risk to the rights and freedoms of natural persons. The GDPR mentions types of data processing for which a controller does not have to evaluate the level of risk because they have been classified as 'high risk' processing, e.g., processing special categories of data on a large scale or processing using new technologies. The methodology includes analysis of legal regulations, e.g., the GDPR and the Polish Act on the Protection of Personal Data, as well as ICO guidelines and articles concerning the risk-based approach in the GDPR. The main conclusion is that an appropriate risk assessment is key to keeping data safe and avoiding financial penalties. On the one hand, this approach seems to be more equitable, not only for controllers or processors but also for data subjects; on the other hand, it increases controllers' uncertainty in the assessment, which could have a direct impact on incorrect data protection and potential responsibility for infringement of the regulation.

Keywords: general data protection regulation, personal data protection, privacy protection, risk based approach

Procedia PDF Downloads 252
4720 Python Implementation for S1000D Applicability Depended Processing Model - SALERNO

Authors: Theresia El Khoury, Georges Badr, Amir Hajjam El Hassani, Stéphane N’Guyen Van Ky

Abstract:

The widespread adoption of machine learning and artificial intelligence across different domains can be attributed to the digitization of data over several decades, resulting in vast amounts of data, types, and structures. Thus, data processing and preparation turn out to be a crucial stage. However, applying these techniques to S1000D standard-based data poses a challenge due to its complexity and the need to preserve logical information. This paper describes SALERNO, an S1000D AppLicability dEpended pRocessiNg mOdel. This Python-based model analyzes and converts XML S1000D-based files into an easier data format that can be used in machine learning techniques while preserving the different logic and relationships in the files. The model parses the files in a given folder, filters them, and extracts the required information to be saved in appropriate data frames and Excel sheets. Its main idea is to group the extracted information by applicability. In addition, it extracts the full text by replacing internal and external references while maintaining the relationships between files, as well as the necessary requirements. The resulting files can then be saved in databases and used in different models. Documents in both English and French were tested, and special characters were decoded. Updates to the technical manuals were taken into consideration as well. The model was tested on different versions of S1000D, and the results demonstrated its ability to effectively handle the applicability, requirements, references, and relationships across all files and on different levels.
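
A minimal sketch of the parse-filter-group idea; the element paths ("applic", "para", etc.) loosely follow S1000D conventions and are assumptions, not SALERNO's actual schema handling:

    import xml.etree.ElementTree as ET
    from pathlib import Path
    import pandas as pd

    rows = []
    for path in Path("data_modules").glob("*.xml"):
        root = ET.parse(path).getroot()
        # Assumed element paths for the applicability statement and body text.
        applic = root.findtext(".//applic/displayText/simplePara", default="ALL")
        text = " ".join(p.text or "" for p in root.iter("para"))
        rows.append({"file": path.name, "applicability": applic, "text": text})

    df = pd.DataFrame(rows)
    for applic, group in df.groupby("applicability"):   # group by applicability
        group.to_excel(f"applic_{applic}.xlsx", index=False)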

Keywords: aeronautics, big data, data processing, machine learning, S1000D

Procedia PDF Downloads 157
4719 Pre-Analysis of Printed Circuit Boards Based on Multispectral Imaging for Vision Based Recognition of Electronics Waste

Authors: Florian Kleber, Martin Kampel

Abstract:

The increasing demand for gallium, indium and rare-earth elements for the production of electronics, e.g., solid-state lighting, photovoltaics, integrated circuits, and liquid crystal displays, will exceed the world-wide supply according to current forecasts. Recycling systems to reclaim these materials are not yet in place, which challenges the sustainability of these technologies. This paper proposes a multispectral imaging system as the basis for a vision-based recognition system for valuable components of electronics waste. Multispectral images are intended to enhance the contrast of images of printed circuit boards (single components as well as labels) for further analysis, such as optical character recognition and entire printed circuit board recognition. The results show that a higher contrast is achieved in the near infrared compared to ultraviolet and visible light.

Keywords: electronics waste, multispectral imaging, printed circuit boards, rare-earth elements

Procedia PDF Downloads 415
4718 Wasteless Solid-Phase Method for Conversion of Iron Ores Contaminated with Silicon and Phosphorus Compounds

Authors: А. V. Panko, Е. V. Ablets, I. G. Kovzun, М. А. Ilyashov

Abstract:

Based upon a generalized analysis of modern know-how in the sphere of processing, concentration and purification of iron-ore raw materials (IORM), in particular the most widespread ferrioxide-silicate materials (FOSM) containing impurities of phosphorus and other elements' compounds, a special role of nanotechnological initiatives in the improvement of such processes is noted. Ideas on the role of nanoparticles in processes of FOSM carbonization with subsequent direct reduction of the ferric oxides contained in them to the metal phase, as well as in processes of alkali treatment and separation of powdered iron from phosphorus compounds, are considered. Using the obtained results, a wasteless solid-phase processing, concentration and purification of IORM and FOSM from compounds of phosphorus, silicon and other impurities was developed, excelling known methods of direct iron reduction from iron ores and metallurgical slimes.

Keywords: iron ores, solid-phase reduction, nanoparticles in reduction and purification of iron from silicon and phosphorus, wasteless method of ores processing

Procedia PDF Downloads 488
4717 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality

Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye

Abstract:

When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings, and it is obtained through matrix perturbation theory by utilizing the unitary invariance of word embeddings. Unlike in natural language processing, in genomics, especially in genome sequence processing, there is no notion of a "word"; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is accomplished by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Utilising the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in accurately computing embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
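
A sketch of the k-mer tokenization and embedding step with gensim; the sequences are toy data, and a real experiment would sweep k and vector_size and compare the PIP loss:

    from gensim.models import Word2Vec

    def kmerize(seq: str, k: int):
        """Split a genomic sequence into overlapping k-mers (the 'words')."""
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    sequences = ["ATGCGTACGTTAGC", "ATGCGTTTACGAGC", "GGCATGCGTACGTA"]
    corpus = [kmerize(s, k=4) for s in sequences]

    model = Word2Vec(corpus, vector_size=16, window=5, min_count=1, sg=1, epochs=50)
    print(model.wv.most_similar("ATGC", topn=3))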

Keywords: word embeddings, k-mer embedding, dimensionality reduction

Procedia PDF Downloads 138
4716 Graph Codes - 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Authors: Stefan Wagenpfeil, Felix Engel, Paul McKevitt, Matthias Hemmje

Abstract:

Multimedia indexing and retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computation-intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a graph projection into a 2D space (the Graph Code), as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelization. In consequence, we show that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.
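
A simplified sketch of the 2D-projection idea (the encoding below is an approximation, not the paper's exact Graph Code definition): a feature graph becomes a matrix with nodes on both axes and relationship types as cell values, so similarity reduces to cheap element-wise operations instead of graph traversal:

    import numpy as np

    def graph_code(nodes, edges, vocab):
        n = len(vocab)
        m = np.zeros((n, n), dtype=np.int8)
        for node in nodes:
            m[vocab[node], vocab[node]] = 1     # node presence on the diagonal
        for a, b, rel in edges:
            m[vocab[a], vocab[b]] = rel         # relationship type off-diagonal
        return m

    vocab = {"person": 0, "ball": 1, "grass": 2}
    g1 = graph_code(["person", "ball"], [("person", "ball", 2)], vocab)
    g2 = graph_code(["person", "grass"], [("person", "grass", 2)], vocab)

    similarity = (g1 == g2).mean()   # element-wise comparison parallelizes trivially
    print("Graph Code similarity:", similarity)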

Keywords: indexing, retrieval, multimedia, graph algorithm, graph code

Procedia PDF Downloads 161