Search results for: background segmentation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4753

4543 Application of the Quantile Regression Approach to the Heterogeneity of the Fine Wine Prices

Authors: Charles-Olivier Amédée-Manesme, Benoit Faye, Eric Le Fur

Abstract:

In this paper, the heterogeneity of the Bordeaux Legends 50 wine market price segment is addressed. For this purpose, quantile regression is applied, with market segmentation based on wine bottle price quantile, and the hedonic price of wine attributes is computed for various price segments of the market. The approach is applied to a major privately held data set which consists of approximately 30,000 transactions over the 2003–2014 period. The findings suggest that the relative hedonic prices of several wine attributes differ significantly among deciles. In particular, the elasticity coefficient of the expert ratings shows strong variation among prices. If, as suggested in the literature, expert ratings have a positive influence on wine price on average, they have a clearly decreasing impact over the quantiles. Finally, the lower the wine price, the higher the potential for price appreciation over time. Other variables such as chateaux or vintage are also shown to vary across the distribution of wine prices. While enhancing our understanding of the complex market dynamics that underlie Bordeaux wine prices, this research provides empirical evidence that the QR approach adequately captures heterogeneity across wine price ranges, and that this heterogeneity applies simultaneously to wine stocks, vintages, and auction houses.
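
The core technique is easy to sketch: one hedonic regression per price quantile, so each attribute's coefficient is allowed to differ across the price distribution. Below is a minimal, hedged Python illustration using statsmodels; the column names and synthetic data are assumptions for demonstration, not the authors' dataset.

```python
# Hedged sketch of the quantile-regression hedonic approach: fit the same
# hedonic model at several quantiles and compare coefficients.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "expert_rating": rng.uniform(85, 100, n),   # assumed attribute columns
    "vintage_age": rng.integers(1, 30, n),
})
# Synthetic log-price; real data would come from auction transactions
df["log_price"] = (2 + 0.05 * df["expert_rating"]
                   + 0.02 * df["vintage_age"] + rng.normal(0, 0.3, n))

# One hedonic regression per quantile: coefficients may differ across deciles
for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("log_price ~ expert_rating + vintage_age", df).fit(q=q)
    print(f"q={q}: rating coefficient = {fit.params['expert_rating']:.4f}")
```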

Keywords: hedonics, market segmentation, quantile regression, heterogeneity, wine economics

Procedia PDF Downloads 309
4542 Multi-Scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover and of land productivity, as well as sand encroachment on urban development zones. In this study, we investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of PlanetScope PSB.SB sensor images of September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barchans). It is important to note that each auxiliary data layer contributed to improving the segmentation at different scales. The silted areas were classified using a nearest-neighbour approach over the Naâma area. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and lowest mapping accuracy (Kappa: 0.79), which was partially attributed to confounding a greater proportion of mixed siltation classes from both sandy areas and bare ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
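
The paper's FNEA segmentation lives in a proprietary toolchain; as a freely available stand-in, the sketch below runs scikit-image's Felzenszwalb graph-based segmentation at three scales over a multispectral stack augmented with an NDVI layer. The synthetic tile and band order are assumptions.

```python
# Hedged sketch of multi-scale object-based segmentation with an NDVI
# auxiliary layer; Felzenszwalb stands in for the FNEA algorithm.
import numpy as np
from skimage.segmentation import felzenszwalb

def ndvi(red, nir):
    """Normalized difference vegetation index, one auxiliary layer."""
    return (nir - red) / (nir + red + 1e-9)

image = np.random.rand(256, 256, 4)   # stand-in for a PlanetScope tile (B,G,R,NIR)
red, nir = image[..., 2], image[..., 3]
stack = np.dstack([image[..., :3], ndvi(red, nir)])

# Three segmentation scales: a coarser 'scale' merges more pixels per object
for scale in (50, 200, 800):
    labels = felzenszwalb(stack, scale=scale, sigma=0.8, min_size=20)
    print(f"scale={scale}: {labels.max() + 1} objects")
```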

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

Procedia PDF Downloads 77
4541 Semiautomatic Calculation of Ejection Fraction Using Echocardiographic Image Processing

Authors: Diana Pombo, Maria Loaiza, Mauricio Quijano, Alberto Cadena, Juan Pablo Tello

Abstract:

In this paper, we present a semi-automatic tool for calculating the ejection fraction from an echocardiographic video signal drawn from a DICOM-format database of the Clínica de la Costa, Barranquilla. We describe each of the steps and methods used to obtain the respective calculation, including acquisition and formation of the test samples, processing, and finally the calculation of the parameters needed to obtain the ejection fraction. Two image segmentation methods were compared following a methodological framework that is similar only in the initial stages of processing (filtering and image enhancement) and differs at the end, when the algorithms are implemented (active contour and region growing algorithms). The results were compared with the measurements obtained by two different medical specialists in cardiology who calculated the ejection fraction of the study samples using the traditional method, which consists of drawing the region of interest directly on the computer using echocardiography equipment and a simple equation to calculate the desired value. The results showed that if the quality of the video samples is good (i.e., after the pre-processing there is evidence of an improvement in contrast), the values provided by the tool are substantially close to those reported by the physicians; the correlation between physicians also does not vary significantly.
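
The final quantity reduces to one equation once the end-diastolic volume (EDV) and end-systolic volume (ESV) have been measured from the segmented ventricle; a minimal sketch:

```python
# EF = (EDV - ESV) / EDV * 100, the "simple equation" the abstract refers to.
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return (edv_ml - esv_ml) / edv_ml * 100.0

print(ejection_fraction(120.0, 50.0))  # ~58.3%, within the normal range
```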

Keywords: echocardiography, DICOM, processing, segmentation, EDV, ESV, ejection fraction

Procedia PDF Downloads 402
4540 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA studies that use catalogues to develop area or smoothed-seismicity sources are limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected magnitude threshold may potentially occur in the background and not only at the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, could rupture during a single fault-to-fault rupture. It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool implements a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (south-east of France), where the fault is assumed to have a low strain rate.
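
The slip-rate-to-activity-rate conversion that both compared models build on can be sketched with the standard moment-balance argument; the shear modulus, fault dimensions and characteristic magnitude below are illustrative assumptions, not values from the case study.

```python
# Hedged sketch: the fault's annual seismic moment budget (mu * area * slip
# rate) is spent on earthquakes above the threshold magnitude.
MU = 3.0e10            # assumed crustal shear modulus, Pa

def moment_rate(area_m2: float, slip_rate_m_per_yr: float) -> float:
    """Annual seismic moment budget of the fault (N*m/yr)."""
    return MU * area_m2 * slip_rate_m_per_yr

def moment_of_magnitude(mw: float) -> float:
    """Hanks & Kanamori (1979): M0 = 10**(1.5*Mw + 9.1), in N*m."""
    return 10 ** (1.5 * mw + 9.1)

# If the whole budget were released in characteristic Mw 6.5 ruptures:
area = 30e3 * 12e3                      # 30 km long, 12 km seismogenic depth
rate = moment_rate(area, 0.1e-3) / moment_of_magnitude(6.5)   # 0.1 mm/yr
print(f"~{rate:.2e} events/yr, i.e. one every {1 / rate:,.0f} years")
```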

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 28
4539 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

Authors: El Harraj Abdeslam, Raissouni Naoufal

Abstract:

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most widely used approach for moving object detection and tracking is background subtraction. Many background subtraction approaches have been suggested, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: illumination-change handling, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
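
The three phases map directly onto standard OpenCV primitives; a minimal sketch, with the video file name as a placeholder:

```python
# Hedged sketch: CLAHE against illumination changes, a 5-component GMM
# background model, and morphological clean-up of the foreground mask.
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
bg.setNMixtures(5)                          # K = 5 Gaussians per pixel
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("surveillance.avi")  # placeholder file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    enhanced = clahe.apply(gray)            # mitigate illumination changes
    mask = bg.apply(enhanced)               # GMM foreground/background split
    # erosion followed by dilation (opening) removes speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cap.release()
```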

Keywords: video surveillance, background subtraction, contrast limited histogram equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes

Procedia PDF Downloads 234
4538 Computer-Aided Classification of Liver Lesions Using Contrasting Features Difference

Authors: Hussein Alahmer, Amr Ahmed

Abstract:

Liver cancer is one of the common diseases that cause death. Early detection is important for diagnosis and reduces the incidence of death. Improvements in medical imaging and image processing techniques have significantly enhanced the interpretation of medical images. Computer-Aided Diagnosis (CAD) systems based on these techniques play a vital role in the early detection of liver disease and hence reduce the liver cancer death rate. This paper presents an automated CAD system consisting of three stages: first, automatic liver segmentation and lesion detection; second, feature extraction; and finally, classification of liver lesions into benign and malignant using the novel contrasting feature-difference approach. Several types of intensity and texture features are extracted from both the lesion area and its surrounding normal liver tissue. The difference between the features of both areas is then used as the new lesion descriptor. Machine learning classifiers are then trained on the new descriptors to automatically classify liver lesions into benign or malignant. The experimental results show promising improvements. Moreover, the proposed approach can overcome the problems of varying ranges of intensity and texture between patients, demographics, and imaging devices and settings.
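
The contrasting feature-difference idea can be sketched in a few lines: compute the same statistics inside the lesion mask and in a surrounding ring of normal tissue, then keep their difference as the descriptor. The feature set and masks below are simplified illustrations, not the paper's full set.

```python
# Hedged sketch of the feature-difference descriptor.
import numpy as np
from scipy import ndimage

def features(region: np.ndarray) -> np.ndarray:
    """Toy intensity features; the paper uses many intensity/texture features."""
    return np.array([region.mean(), region.std(),
                     np.percentile(region, 90) - np.percentile(region, 10)])

def difference_descriptor(image, lesion_mask, ring_width=5):
    # ring of surrounding normal tissue = dilated mask minus the lesion
    ring = ndimage.binary_dilation(lesion_mask, iterations=ring_width) & ~lesion_mask
    return features(image[lesion_mask]) - features(image[ring])

img = np.random.rand(128, 128)                       # stand-in CT slice
mask = np.zeros_like(img, dtype=bool)
mask[50:70, 50:70] = True                            # stand-in lesion mask
print(difference_descriptor(img, mask))              # fed to the classifier
```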

Keywords: CAD system, difference of feature, fuzzy c means, lesion detection, liver segmentation

Procedia PDF Downloads 291
4537 A Fast Parallel and Distributed Type-2 Fuzzy Algorithm Based on Cooperative Mobile Agents Model for High Performance Image Processing

Authors: Fatéma Zahra Benchara, Mohamed Youssfi, Omar Bouattane, Hassan Ouajji, Mohamed Ouadi Bensalah

Abstract:

The aim of this paper is to present a distributed implementation of the Type-2 Fuzzy algorithm in a parallel and distributed computing environment based on mobile agents. The proposed algorithm is designed to be implemented on a SPMD (Single Program Multiple Data) architecture based on cooperative mobile agents following the AVPE (Agent Virtual Processing Element) model, in order to improve the processing resources needed for performing big data image segmentation. In this work we focus on the application of this algorithm to the processing of a big data MRI (Magnetic Resonance Imaging) image of size (n x m). The image is encapsulated in the mobile agent team leader and split into (n x m) pixels, one per AVPE. Each AVPE performs and exchanges its segmentation results and maintains asynchronous communication with the team leader until the convergence of the algorithm. Some interesting experimental results are obtained in terms of accuracy and efficiency analysis of the proposed implementation, thanks to the several interesting skills that mobile agents introduce into this distributed computational model.
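
As a single-machine analogue of the agent-based SPMD model, the sketch below scatters image rows to worker processes (standing in for AVPEs) and gathers their results at a "team leader"; the per-worker threshold is a placeholder for the type-2 fuzzy clustering step, which the paper performs per pixel.

```python
# Hedged sketch: scatter/segment/gather, with multiprocessing standing in
# for the mobile-agent middleware.
import numpy as np
from multiprocessing import Pool

def segment_tile(tile: np.ndarray) -> np.ndarray:
    return (tile > tile.mean()).astype(np.uint8)   # placeholder per-AVPE work

if __name__ == "__main__":
    mri = np.random.rand(512, 512)                 # stand-in MRI image
    tiles = [mri[i::4] for i in range(4)]          # scatter rows to 4 workers
    with Pool(4) as pool:
        parts = pool.map(segment_tile, tiles)      # workers exchange results
    merged = np.empty_like(mri, dtype=np.uint8)
    for i, part in enumerate(parts):
        merged[i::4] = part                        # team leader gathers
```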

Keywords: distributed type-2 fuzzy algorithm, image processing, mobile agents, parallel and distributed computing

Procedia PDF Downloads 389
4536 Background Check System for Turkish IT Companies

Authors: Arzu Baloglu, Ugur Kaplancali

Abstract:

This paper focuses on background check systems and pre-employment screening. In our study, we attempted to build an online background checking site that will help employers when hiring employees. Our site has two types of users: free and powered. Free users are the employees, and powered users are the employers who will hire employees. The database of the site contains all the information about the employees and employers registered in the system, so the employers can search based on their criteria to find the suitable employee for the job. The web site also has a comments and points system. The current employer can make comments about his/her employees and can also give them points. The comments are shown on the employee’s profile, so when an employer searches for an employee, he/she can check the points and comments of the employee to see whether he or she is capable of the job or not. The employers can also follow some employees if they desire. The system was designed and implemented using ASP.NET, C# and JavaScript. The outputs have a user-friendly interface, which also aims to provide useful information for Turkish technology companies.

Keywords: background, checking, verification, human resources, online

Procedia PDF Downloads 166
4535 Video Text Information Detection and Localization in Lecture Videos Using Moments

Authors: Belkacem Soundes, Guezouli Larbi

Abstract:

This paper presents a robust and accurate method for text detection and localization in lecture videos. Frame regions are classified into text or background based on visual feature analysis. However, lecture videos show significant degradation, mainly related to acquisition conditions, camera motion and environmental changes, resulting in low quality videos and hence affecting feature extraction and description efficiency. Moreover, traditional text detection methods cannot be directly applied to lecture videos. Therefore, robust feature extraction methods dedicated to this specific video genre are required for robust and accurate text detection and extraction. The method consists of a three-step process: slide region detection and segmentation, feature extraction, and non-text filtering. For robust and effective feature extraction, moment functions are used. Two distinct types of moments are used: orthogonal and non-orthogonal. For the orthogonal type, both Zernike and pseudo-Zernike moments are used, whereas for the non-orthogonal type Hu moments are used. Their expressivity and description efficiency are given and discussed. The proposed approach shows that, in general, orthogonal moments achieve higher accuracy than non-orthogonal ones, and that pseudo-Zernike moments are more effective than Zernike moments, with better computation time.
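
Hu's seven invariant moments, the non-orthogonal family used here, ship with OpenCV; a minimal sketch on a synthetic glyph (Zernike and pseudo-Zernike moments require a dedicated implementation, e.g. the mahotas library, and are omitted):

```python
# Hedged sketch of the moment-based description step with Hu moments.
import cv2
import numpy as np

region = np.zeros((64, 64), np.uint8)
cv2.putText(region, "A", (12, 48), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 255, 3)

m = cv2.moments(region)                 # raw spatial/central moments
hu = cv2.HuMoments(m).flatten()         # 7 rotation/scale/translation invariants
# log-scaling is customary because the raw values span many orders of magnitude
hu_log = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
print(hu_log)
```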

Keywords: text detection, text localization, lecture videos, pseudo zernike moments

Procedia PDF Downloads 122
4534 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data

Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene

Abstract:

Cutaneous melanoma is a melanocytic skin tumour which has a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for the stage of disease, prognosis and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour (MST) differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation. The segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using the ultrasound data (11 acoustical, 4 shape and 15 textural parameters), along with 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the spectrophotometric imaging device SIAscope). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz centre frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each MST were evaluated during routine histological examination after excision and used as a reference. The results of this study have shown that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
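
The optical-image step is plain Otsu thresholding; a minimal scikit-image sketch on a stand-in channel:

```python
# Hedged sketch: Otsu's histogram-based global threshold for the optical image.
import numpy as np
from skimage.filters import threshold_otsu

channel = np.random.rand(256, 256)          # stand-in for a SIAscope layer
t = threshold_otsu(channel)                 # threshold maximizing class separation
tumour_mask = channel > t                   # binary region of interest
print(f"threshold={t:.3f}, lesion pixels={tumour_mask.sum()}")
```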

Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging

Procedia PDF Downloads 245
4533 Mathematics Teachers’ Background Characteristics as a Correlate of Secondary School Students’ Achievement in Mathematics in Gombe State, Nigeria

Authors: Ali Adamu

Abstract:

Teachers’ background characteristics as a correlate of students’ achievement in mathematics were studied in Gombe State. The Pearson Product Moment Correlation Coefficient was used for the analysis. Five hundred and twelve (512) students and 20 teachers from 12 schools in Gombe State of Nigeria were used for the study. Students’ achievement tests and mathematics teachers’ backgrounds were the instruments for the study. The findings indicated that teachers’ qualifications, teaching experience, and personalities had a positive correlation with students’ achievement. Recommendations are made, including allowing teachers to undertake further training and ensuring that the government recruits teachers with experience in the teaching job.

Keywords: achievement-test, teachers’ personality, teaching mathematics, teacher-background

Procedia PDF Downloads 62
4532 Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Authors: Jaeyoung Lee

Abstract:

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of a number of vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time of day and season. Image segmentation methods using deep learning, which has recently evolved rapidly, provide high recognition performance stably in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are optimized for GPU environments, which degrades their performance in embedded processor environments equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA, and the lack of information caused by fixing the number of channels is resolved by increasing the number of parallel branches. Second, an efficient convolution is selected depending on the size of the activation. Since the MMA is a fixed-function unit, normal convolution may be more efficient than depthwise separable convolution depending on memory access overhead; thus, the convolution type is decided according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP). The suggested method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data. In addition, it consists of two ASPPs to obtain high quality contexts using the restored shape, without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, an experiment is conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network can process an image with 640 x 480 resolution in 6.67 ms, so six cameras can be used to identify the surroundings of the vehicle at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks on the Cityscapes validation set.
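
The modified ASPP described — parallel atrous branches with a fixed channel count and no global-average-pooling path — can be sketched in PyTorch; the dilation rates and widths below are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of an ASPP block without a global-average-pooling branch.
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # concatenation keeps the channel count fixed and MMA-friendly
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

y = SimpleASPP()(torch.randn(1, 64, 30, 40))
print(y.shape)   # torch.Size([1, 64, 30, 40])
```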

Keywords: edge network, embedded network, MMA, matrix multiplication accelerator, semantic segmentation network

Procedia PDF Downloads 96
4531 The Relationship between the Speed of Light and Cosmic Background Potential

Authors: Youping Dai, Xinping Dai, Xiaoyun Li

Abstract:

In this paper, the effect of the Cosmic Background Gravitational Potential (CBGP) is discussed. It is helpful for revealing the equivalence of gravitational and inertial mass, and for understanding the origin of inertia. The derivation is similar to the classic approach adopted by Landau in the book 'Classical Theory of Fields'. The main differences are that we used CBGP = Lambda^2 instead of c^2, and used CBGP energy E = m*Lambda^2 instead of kinetic energy E = (1/2)m*v^2 as initial assumptions (where Lambda has the same units as velocity). It is shown that the Lorentz transformation, rest energy and Newtonian mechanics are all affected by the CBGP, and that the square of the speed of light is equal to the CBGP too. Finally, the upper value of the cosmic mass density and the cosmic radius are discussed.

Keywords: the origin of inertia, Mach's principle, equivalence principle, cosmic background potential

Procedia PDF Downloads 350
4530 Fruit Identification System in Sweet Orange Citrus sinensis (L.) Osbeck Using Thermal Imaging and Fuzzy Logic

Authors: Ingrid Argote, John Archila, Marcelo Becker

Abstract:

In agriculture, intelligent systems applications have generated great advances in automating some of the processes in the production chain. In order to improve the efficiency of those systems, a vision system to estimate the number of fruits on sweet orange trees is proposed. This work presents a system proposal using the capture of thermal images and fuzzy logic. A bibliographical review has been done to analyze the state of the art of the different systems used in fruit recognition, as well as the different applications of thermography in agricultural systems. The algorithm developed for this project uses the fuzziness parameter metric for contrast improvement and segmentation of the image; for the counting algorithm, the Hough transform was used. In order to validate the proposed algorithm, a bank of images of sweet orange Citrus sinensis (L.) Osbeck acquired at the Maringá Farm was created. The tests with the algorithm indicated that the temperature variation between the tree branches and the fruit is not very high, which complicates image segmentation based on this difference and increases the number of false positives in the fruit counting algorithm. Recognition of isolated fruits with the proposed algorithm presented an overall accuracy of 90.5%, while for grouped fruits the accuracy was 81.3%. The experiments show the need for more suitable hardware to achieve better recognition of small temperature changes in the image.
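
The counting step uses the circular Hough transform, available directly in OpenCV; a hedged sketch on a stand-in frame (the parameters are illustrative and would need tuning for real thermograms):

```python
# Hedged sketch of circle detection for fruit counting.
import cv2
import numpy as np

thermal = np.random.randint(0, 256, (240, 320), np.uint8)  # stand-in frame
blurred = cv2.medianBlur(thermal, 5)                       # suppress noise
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=25)
count = 0 if circles is None else circles.shape[1]
print(f"fruit candidates: {count}")
```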

Keywords: agricultural systems, citrus, fuzzy logic, thermal images

Procedia PDF Downloads 208
4529 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is “Alphabet Recognition Using Pixel Probability Distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and directly stores it to the contacts. OCRs are also known to be used in radar systems for reading speeding vehicles’ license plates, among other things. Our project was implemented using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: This module aims at database generation. The database was generated using two methods: (a) Run-time generation, i.e. database generation at compilation time using the inbuilt fonts of the OpenCV library; human intervention is not necessary for generating this database. (b) Contour detection: a ‘jpeg’ template containing different fonts of an alphabet is converted to a weighted matrix using specialized functions (contour detection and blob detection) of OpenCV. The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119 KB precisely). (2) Preprocessing: The input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing and dilating, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: The extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
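
The preprocessing module maps onto standard OpenCV calls; a minimal sketch on a synthetic page image, ending with a crude row-profile line segmentation:

```python
# Hedged sketch of the pre-processing step: adaptive thresholding,
# binarisation and dilation before line/word/letter segmentation.
import cv2
import numpy as np

img = np.full((100, 300), 255, np.uint8)            # synthetic white page
cv2.putText(img, "HELLO", (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 2, 0, 4)

binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, blockSize=31, C=10)
dilated = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=1)

# crude line segmentation: rows whose ink count exceeds a small threshold
rows_with_ink = (dilated > 0).sum(axis=1) > 5
print(f"text rows: {rows_with_ink.sum()} of {len(rows_with_ink)}")
```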

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 357
4528 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row, and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor in the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and value is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras – an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods along the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than any other angle. Acquiring images in the afternoon resulted in the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall average for all the images when using these values was 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
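
The reported segmentation rule — keeping pixels with hue in [0.12, 0.18] — translates directly to scikit-image, whose HSV conversion scales hue to [0, 1]; the saturation and value cutoffs below are illustrative assumptions:

```python
# Hedged sketch of the HSV hue-band segmentation for yellow flowers.
import numpy as np
from skimage.color import rgb2hsv

rgb = np.random.rand(120, 160, 3)              # stand-in greenhouse image
hsv = rgb2hsv(rgb)
hue, sat, val = hsv[..., 0], hsv[..., 1], hsv[..., 2]

# yellow band reported in the paper; sat/val cutoffs are assumptions
flower_mask = (hue >= 0.12) & (hue <= 0.18) & (sat > 0.3) & (val > 0.3)
print(f"candidate flower pixels: {flower_mask.sum()}")
```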

Keywords: agricultural engineering, image processing, computer vision, flower detection

Procedia PDF Downloads 296
4527 Video Object Segmentation for Automatic Image Annotation of Ethernet Connectors with Environment Mapping and 3D Projection

Authors: Marrone Silverio Melo Dantas, Pedro Henrique Dreyer, Gabriel Fonseca Reis de Souza, Daniel Bezerra, Ricardo Souza, Silvia Lins, Judith Kelner, Djamel Fawzi Hadj Sadok

Abstract:

The creation of a dataset is time-consuming and often discourages researchers from pursuing their goals. To overcome this problem, we present and discuss two solutions adopted for the automation of this process. Both optimize valuable user time and resources and support video object segmentation with object tracking and 3D projection. In our scenario, we acquire images from a moving robotic arm and, for each approach, generate distinct annotated datasets. We evaluated the precision of the annotations by comparing these with a manually annotated dataset, as well as their efficiency in the context of detection and classification problems. For detection support, we used YOLO and obtained for the projection dataset F1-score, accuracy, and mAP values of 0.846, 0.924, and 0.875, respectively. Concerning the tracking dataset, we achieved an F1-score of 0.861 and an accuracy of 0.932, whereas mAP reached 0.894. In order to evaluate the quality of the annotated images used for classification problems, we employed deep learning architectures. We adopted the metrics accuracy and F1-score for VGG, DenseNet, MobileNet, Inception, and ResNet. The VGG architecture outperformed the others for both projection and tracking datasets: for the projection dataset it reached an accuracy and F1-score of 0.997 and 0.993, respectively, while for the tracking dataset it achieved an accuracy of 0.991 and an F1-score of 0.981.

Keywords: RJ45, automatic annotation, object tracking, 3D projection

Procedia PDF Downloads 133
4526 Investigation on Performance of Optical Shutter Panels for Transparent Displays

Authors: Jaehong Kim, Sunhee Park, HongSeop Shin, Kyongho Lim, Suhyun Kwon, Don-Gyou Lee, Pureum Kim, Moojong Lim, JongSang Baek

Abstract:

Transparent displays with OLEDs are the most commonly produced forms of see-through displays on the market or in development. In order to block the visual interruption caused by the light coming from the background, a special panel is combined with the transparent OLED display. There have, however, been few studies of the performance of optical shutter panels for transparent displays until now. This paper, therefore, describes the performance of optical shutter panels. A novel evaluation method was developed by measuring the amount of light which can form a transmitted background image. The proposed method can quantify how recognizable the transmitted background image is, and is consistent with viewers’ perception.

Keywords: optical shutter panel, optical performance, transparent display, visual interruption

Procedia PDF Downloads 503
4525 Robust Adaptation to Background Noise in Multichannel C-OTDR Monitoring Systems

Authors: Andrey V. Timofeev, Viktor M. Denisov

Abstract:

A robust sequential nonparametric method is proposed for real-time adaptation to background noise parameters. The distribution of the background noise is modelled as a Huber contamination mixture. The method is designed to operate as an adaptation unit included inside the detection subsystem of an integrated multichannel monitoring system. The proposed method guarantees a given size of the non-asymptotic confidence set for the noise parameters. The properties of the suggested method are rigorously proved. The proposed algorithm has been successfully tested in the real conditions of a functioning C-OTDR monitoring system designed to monitor railways.

Keywords: guaranteed estimation, multichannel monitoring systems, non-asymptotic confidence set, contamination mixture

Procedia PDF Downloads 397
4524 Externalizing Behavior Problems Influencing Social Behavior in Early Adolescence

Authors: Zhidong Zhang, Zhi-Chao Zhang

Abstract:

This study focuses on early adolescent externalizing behavioral problems, specifically rule-breaking behavior and aggressive behavior, using the Achenbach System of Empirically Based Assessment (ASEBA) instrument. The purpose was to analyze the relationships between the externalizing behavioral problems and relevant background variables such as sports activities, hobbies, chores and the number of close friends. The stratified sampling method was used to collect data from 1975 participants. The results indicated that several background variables could significantly predict rule-breaking behavior and aggressive behavior. Further, a hierarchical modeling method was used to explore the causal relations among the background variables, rule-breaking behavior variables and aggressive behavior variables.

Keywords: aggressive behavior, breaking behavior, early adolescence, externalizing problem

Procedia PDF Downloads 471
4523 Investigation on Optical Performance of Operational Shutter Panels for Transparent Displays

Authors: Jaehong Kim, Sunhee Park, HongSeop Shin, Kyongho Lim, Suhyun Kwon, Don-Gyou Lee, Pureum Kim, Moojong Lim, JongSang Baek

Abstract:

Transparent displays with OLEDs are the most commonly produced forms of see-through displays on the market or in development. In order to block the visual interruption caused by the light coming from the background, a special panel is combined with the transparent OLED display. There have, however, been few studies of the optical performance of operational shutter panels for transparent displays until now. This paper, therefore, describes the optical performance of operational shutter panels. A novel evaluation method was developed by measuring the amount of light which can form a transmitted background image. The proposed method can quantify how recognizable the transmitted background image is, and is consistent with viewers’ perception.

Keywords: transparent display, operational shutter panel, optical performance, OLEDs

Procedia PDF Downloads 417
4522 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, to heat or humidify inspired air, for thermoregulation, to impart resonance to the voice and others. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so, volume analysis has clinical value. Providing volume values for MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. The computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and if possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantity the MS volume in CT scans of paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams of University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method, combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM), by features such as pixel value, spatial distribution, shape and others. The detected pixels are used as seed point for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied in all slices of CT exam, obtaining the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation realized by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and Jaccard similarity coefficient. From the statistical analyses for the comparison between both methods, the linear regression showed a strong association and low dispersion between variables. The Bland–Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to quantify MS volume proved to be robust, fast, and efficient, when compared with manual segmentation. Furthermore, it avoids the intra and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
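
The detect-then-grow chain can be sketched with a flood fill standing in for region growing; the toy intensity rule below replaces the SVM detector, and the synthetic slice replaces real CT data:

```python
# Hedged sketch: a candidate pixel (stand-in for SVM output) seeds
# region growing on a CT-like slice.
import numpy as np
from skimage.segmentation import flood

ct_slice = np.random.normal(-400, 60, (256, 256))    # soft-tissue-like HU
ct_slice[100:150, 100:140] = np.random.normal(-950, 20, (50, 40))  # air-filled sinus

candidates = np.argwhere(ct_slice < -900)            # stand-in for SVM detections
seed = tuple(candidates[0])                          # one detected pixel as seed
ms_mask = flood(ct_slice, seed, tolerance=100)       # region growing from seed
print(f"slice MS area: {ms_mask.sum()} pixels")
```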

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 480
4521 The Impact of Feuerstein Enhancement of Learning Potential on the Integration of Children from Socially Disadvantaged Backgrounds into Society

Authors: Michal Kozubík, Svetlana Síthová

Abstract:

Aim: The aim of this study is to introduce the method of instrumental enrichment to people who work in the helping professions and to show further possibilities of its realization with children from socially disadvantaged backgrounds. Methods: We focused on Feuerstein’s Instrumental Enrichment method, its theoretical grounds and its practical implementation. We carried out questionnaires and directly observed children from disadvantaged backgrounds in the Partizánske district. Results: We outlined the issues faced by children from a socially disadvantaged environment and their opportunities for social integration using the method. The findings showed the utility of the Feuerstein method. Conclusions: We conclude that Feuerstein methods are very suitable for children from socially disadvantaged backgrounds and that co-operation between social workers and special educators is important.

Keywords: Feuerstein, inclusion, education, socially disadvantaged background

Procedia PDF Downloads 290
4520 The Causal Relationships between Educational Environments and Rule-Breaking Behavior Issues in Early Adolescence

Authors: Zhidong Zhang, Zhi-Chao Zhang

Abstract:

This study focused on early adolescent rule-breaking behavioral problems using the Achenbach System of Empirically Based Assessment (ASEBA) instrument. The purpose was to analyze the relationships between the rule-breaking behavioral problems and relevant background variables such as sports activities, hobbies, chores and the number of close friends. The stratified sampling method was used to collect data from 2532 participants. The results indicated that several background variables could significantly predict rule-breaking behavior and aggressive behavior. Further, a path analysis method was used to explore the correlational and causal relationships among the background variables and rule-breaking behavior variables.

Keywords: ASEBA, rule-breaking, path analysis, early adolescent

Procedia PDF Downloads 340
4519 Perceiving Casual Speech: A Gating Experiment with French Listeners of L2 English

Authors: Naouel Zoghlami

Abstract:

Spoken-word recognition involves the simultaneous activation of potential word candidates which compete with each other for final correct recognition. In continuous speech, the activation-competition process gets more complicated due to speech reductions existing at word boundaries. Lexical processing is more difficult in L2 than in L1 because L2 listeners often lack phonetic, lexico-semantic, syntactic, and prosodic knowledge in the target language. In this study, we investigate the on-line lexical segmentation hypotheses that French listeners of L2 English form and then revise as subsequent perceptual evidence is revealed. Our purpose is to shed further light on the processes of L2 spoken-word recognition in context and better understand L2 listening difficulties through a comparison of skilled and unskilled reactions at the point where their working hypothesis is rejected. We use a variant of the gating experiment in which subjects transcribe an English sentence presented in increments of progressively greater duration. The spoken sentence was “And this amazing athlete has just broken another world record”, chosen mainly because it included common reductions and phonetic features in English, such as elision and assimilation. Our preliminary results show that there is an important difference in the manner in which proficient and less-proficient L2 listeners handle connected speech. Less-proficient listeners delay recognition of words as they wait for lexical and syntactic evidence to appear in the gates. Further statistical analyses are currently being undertaken.

Keywords: gating paradigm, spoken word recognition, online lexical segmentation, L2 listening

Procedia PDF Downloads 441
4518 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation

Authors: Jonathan Gong

Abstract:

Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to better the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to the low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field has expressed the possibility that artificial neural networks can successfully diagnose COVID-19 with high accuracy.

Models and Data: The dataset used is the COVID-19 Radiography Database. This dataset includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model developed uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning to the model. The model then uses a deep neural network to finalize the feature extraction and predict the diagnosis for the input image. This model was trained on 4035 images and validated on 807 images separate from the ones used for training. The images used to train the classification model include an important feature: the pictures are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture. This model is used to extract the lung mask from the chest X-ray image. It is trained on 8577 images and validated on a validation split of 20%. Both models are evaluated on the external dataset, and their accuracy, precision, recall, F1-score, IoU, and loss are calculated.

Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IoU of 0.928.

Conclusion: The models proposed can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
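
The transfer-learning core of the classification model can be sketched in Keras: a frozen DenseNet201 backbone feeding a small dense head for the three classes. The autoencoder stage is omitted, and the input size and head width are assumptions:

```python
# Hedged sketch of DenseNet201 transfer learning for 3-class chest X-rays.
import tensorflow as tf

base = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # freeze the pretrained backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(256, activation="relu"),   # assumed head width
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # COVID-19 / normal / pneumonia
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```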

Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning

Procedia PDF Downloads 96
4517 An Investigation into Computer Vision Methods to Identify Material Other Than Grapes in Harvested Wine Grape Loads

Authors: Riaan Kleyn

Abstract:

Mass wine production companies across the globe are provided with grapes from winegrowers that predominantly utilize mechanical harvesting machines to harvest wine grapes. Mechanical harvesting accelerates the rate at which grapes are harvested, allowing grapes to be delivered faster to meet the demands of wine cellars. The disadvantage of the mechanical harvesting method is the inclusion of material-other-than-grapes (MOG) in the harvested wine grape loads arriving at the cellar which degrades the quality of wine that can be produced. Currently, wine cellars do not have a method to determine the amount of MOG present within wine grape loads. This paper seeks to find an optimal computer vision method capable of detecting the amount of MOG within a wine grape load. A MOG detection method will encourage winegrowers to deliver MOG-free wine grape loads to avoid penalties which will indirectly enhance the quality of the wine to be produced. Traditional image segmentation methods were compared to deep learning segmentation methods based on images of wine grape loads that were captured at a wine cellar. The Mask R-CNN model with a ResNet-50 convolutional neural network backbone emerged as the optimal method for this study to determine the amount of MOG in an image of a wine grape load. Furthermore, a statistical analysis was conducted to determine how the MOG on the surface of a grape load relates to the mass of MOG within the corresponding grape load.
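
The chosen architecture is available off the shelf in torchvision; a minimal sketch running pretrained Mask R-CNN with a ResNet-50 FPN backbone on a stand-in image (fine-tuning on the annotated grape-load images and the MOG-area computation are omitted):

```python
# Hedged sketch of Mask R-CNN (ResNet-50 FPN) instance segmentation.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)                  # stand-in grape-load photo
with torch.no_grad():
    out = model([image])[0]                      # boxes, labels, scores, masks
keep = out["scores"] > 0.5                       # confidence cut-off
mog_pixels = out["masks"][keep].squeeze(1).sum().item()
print(f"kept instances: {keep.sum().item()}, mask pixels: {mog_pixels:.0f}")
```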

Keywords: computer vision, wine grapes, machine learning, machine harvested grapes

Procedia PDF Downloads 58
4516 Web Page Design Optimisation Based on Segment Analytics

Authors: Varsha V. Rohini, P. R. Shreya, B. Renukadevi

Abstract:

In web analytics, information delivery and web usage are optimized and data analysis is performed. Analytics is the measurement, collection and analysis of webpage data. Page statistics and user metrics are the important factors in most web analytics tools; the limitation of the existing tools is that they do not provide design inputs for the optimization of information. This paper aims at extending the scope of web analytics to provide analysis and statistics for each segment of a webpage. The click count is calculated and the concentration of links in a web page is obtained. User metrics are used to help in the proper design of the displayed content in a webpage via the Vision Based Page Segmentation (VIPS) algorithm. When the algorithm is applied to a web page, it divides the entire page into a visual block tree, which further divides the page into visual blocks or segments that help us to understand the usage of each segment in a page and its content. Dynamic web pages and deep web pages are used to extend the scope of web page segment analytics. A space optimization concept is used with the help of the output obtained from the VIPS algorithm. This technique provides visibility of the user interaction with the WebPages and helps us to place the important links in the appropriate segments of the webpage and to effectively manage space in a page and the concentration of links.

Keywords: analytics, design optimization, visual block trees, vision based technology

Procedia PDF Downloads 240
4515 Tolerance of Ambiguity in Relation to Listening Performance across Learners of Various Linguistic Backgrounds

Authors: Amin Kaveh Boukani

Abstract:

Foreign language learning is not straightforward and can be affected by numerous factors, among which personality features like tolerance of ambiguity (TA) are well known and important. Yet such characteristics can themselves be affected by other factors, like learning additional languages. The current investigation, thus, opted to explore the possible effect of linguistic background (being bilingual or trilingual) on the tolerance of ambiguity (TA) of Iranian EFL learners. Furthermore, the possible mediating effect of TA on multilingual learners' language performance (listening comprehension in this study) was expounded. This research involved 68 EFL learners (32 bilinguals, 29 trilinguals) in the age range of 19-29 doing their degrees in the Department of English Language and Literature of Urmia University. A set of questionnaires, including tolerance of ambiguity (Herman et al., 2010) and linguistic background information (Modirkhameneh, 2005), as well as the IELTS listening comprehension test, were used for data collection purposes. The results of a set of independent-samples t-tests and a mediation analysis (Hayes, 2022) showed that (1) linguistic background (being bilingual or trilingual) had a significant direct effect on EFL learners' TA, (2) linguistic background had a significant direct influence on listening comprehension, (3) TA had a substantial direct influence on listening comprehension, and (4) TA considerably mediated the influence of linguistic background on listening comprehension. These results suggest that multilingualism may be considered an advantageous asset for EFL learners and should be a prioritized characteristic in EFL instruction in multilingual contexts. Further pedagogical implications and suggestions for research are proposed in light of effective EFL instruction in multilingual contexts.

Keywords: tolerance of ambiguity, listening comprehension, multilingualism, bilingual, trilingual

Procedia PDF Downloads 33
4514 Iterative Method for Lung Tumor Localization in 4D CT

Authors: Sarah K. Hagi, Majdi Alnowaimi

Abstract:

In the last decade, there have been immense advancements in medical imaging modalities. These advancements make it possible to scan the whole volume of the lung organ in high resolution images within a short time. Owing to this performance, physicians can clearly identify the complicated anatomical and pathological structures of the lung. These advancements therefore create large opportunities to advance all available types of lung cancer treatment and will increase the survival rate. However, lung cancer is still one of the major causes of death, accounting for around 19% of all cancer patients. Several factors may affect the survival rate. One of the serious effects is the breathing process, which can affect the accuracy of diagnosis and of the lung tumor treatment plan. We have therefore developed a semi-automated algorithm to localize the 3D lung tumor positions across all respiratory data during respiratory motion. The algorithm can be divided into two stages. First, lung tumor segmentation and detection is performed on the first phase of the 4D computed tomography (CT) using an active contours method. Then, the tumor's 3D position is localized across all subsequent phases using a 12-degrees-of-freedom affine transformation. Two data sets were used in this study: a computer-simulated 4D CT using the extended cardiac-torso (XCAT) phantom, and clinical 4D CT data sets. The error is presented as the root mean square error (RMSE); the average error across the data sets is 0.94 ± 0.36 mm. Finally, an evaluation and quantitative comparison of the results with a state-of-the-art registration algorithm is introduced. The results obtained from the proposed localization algorithm show promise for localizing a lung tumor in 4D CT data.
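
The first stage, an active contour on one CT phase, can be sketched with scikit-image; the image and initial circle below are synthetic, and the 12-DOF affine tracking across phases is omitted:

```python
# Hedged sketch of active-contour (snake) tumor segmentation on one phase.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

img = np.zeros((200, 200))
img[80:120, 90:130] = 1.0                 # toy tumor blob
img = gaussian(img, sigma=3)              # smooth edges for the snake

theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 45 * np.sin(theta),   # initial circle around blob
                        110 + 45 * np.cos(theta)])
snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)                        # (200, 2) contour points around the tumor
```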

Keywords: automated algorithm, computed tomography, lung tumor, tumor localization

Procedia PDF Downloads 575