Search results for: image and telemetric data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27077

25967 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, which makes them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Session 1, Session 2, Session 3, and Session 4, respectively. Machine learning algorithms were applied to determine whether a person is a genuine user or an imposter based on the password entered. Five machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, Naive Bayes, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbor. Because gesture-based password features vary from one entry to the next, it is difficult to distinguish between a creator and an intruder for authentication.
For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication with a timer of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with Gaussian Radial Basis Kernel outperform other ML algorithms for gesture-based password authentication. Results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from the gesture-based passwords lead to less vulnerable user authentication.
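As a rough illustration of the authentication step described above, the sketch below min-max normalizes the four named features (password score, length, speed, size) and classifies a probe entry with a plain k-NN majority vote, one of the five listed algorithms. All feature values here are invented; the paper's actual SVM pipeline is not reproduced.

```python
import numpy as np

# Hypothetical feature rows: [score, length, speed, size]; labels 1 = genuine, 0 = imposter.
X = np.array([
    [0.82, 14, 3.1, 220], [0.79, 15, 2.9, 210], [0.85, 14, 3.0, 215],  # genuine entries
    [0.55, 11, 4.8, 300], [0.60, 10, 5.1, 310], [0.50, 12, 4.5, 290],  # imposter entries
], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0])

# Min-max normalize each feature to [0, 1], as the abstract describes.
lo, hi = X.min(axis=0), X.max(axis=0)
Xn = (X - lo) / (hi - lo)

def knn_predict(Xtr, ytr, x, k=3):
    """Classify one normalized feature vector by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(Xtr - x, axis=1)
    nearest = ytr[np.argsort(d)[:k]]
    return int(round(nearest.mean()))

probe = (np.array([0.80, 14, 3.0, 218.0]) - lo) / (hi - lo)  # a new password entry
print(knn_predict(Xn, y, probe))  # expected: 1 (genuine)
```

With real session data, the probe would come from a password replication attempt, and the classifier's accuracy would be evaluated per session as in the abstract.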

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 109
25966 Language Effects on the Prestige and Product Image of Advertised Smartphone in Consumer Purchases in Indonesia

Authors: Vidyarini Dwita, Rebecca Fanany

Abstract:

This study discusses the growth of the market for smartphone technology in Indonesia. This country, with the world’s fourth-largest population, has a largely justified reputation as the social media capital of the world. The penetration of social media is high in Indonesia, which represents one of the largest global markets. Most Indonesian users of Facebook, Twitter and other social media platforms access the sites from their mobile phones. Indonesia is expected to continue to be a major market for digital mobile devices, such as smartphones and tablets, that can access the internet. The aim of this study is to describe how the responses of Indonesian consumers to smartphone advertising in English and Indonesian affect their perceptions of the prestige and product image of the advertised items and thus influence their intention to purchase. Advertising for smartphones and similar products is intense and dynamic; it often draws on the social attitudes of Indonesians with respect to linguistic and cultural content and especially appeals to their desire to be part of global mainstream culture. The study uses a qualitative method based on in-depth interviews with 30 participants. Content analysis is employed to analyse the responses of Indonesian consumers to smartphone advertising that uses English and Indonesian text. The findings indicate that consumers’ impressions of English and Indonesian slogans influence their attitudes toward smartphones, suggesting that linguistic context plays a role in influencing consumer purchases.

Keywords: consumer purchases, marketing communication, product image, smartphone advertising, sociolinguistic

Procedia PDF Downloads 225
25965 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition systems find many applications in surveillance and human-computer interaction. As these applications are of much importance and demand more accuracy, greater robustness with less computation time is expected of a face recognition system. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead of the Gabor filters. This image is convolved with a bank of Gabor filters of varying scales and orientations. LDA, a subspace analysis technique, is then used to reduce the intra-class space and maximize the inter-class space. The variants used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)2LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt (2D)2LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as Gabor features are illumination invariant. The approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database and the faces94 database. The proposed HGWLDA approach is found to provide better results than the existing Gabor approach.
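The Gabor filter bank at the heart of this pipeline can be sketched as follows. This is the standard real-valued Gabor formulation with illustrative parameter values (sizes, scales and orientations are assumptions, not the paper's settings):

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope modulating a cosine carrier
    rotated by theta. Parameter values here are illustrative only."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd + psi)

# A small bank: 2 scales x 4 orientations, as in typical Gabor-feature pipelines.
bank = [gabor_kernel(sigma=s, theta=t)
        for s in (2.0, 4.0)
        for t in np.arange(4) * np.pi / 4]
print(len(bank), bank[0].shape)  # 8 kernels of 15x15
```

Convolving the reduced grayscale image with each kernel in the bank yields the feature maps that are then passed to the LDA stage.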

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 469
25964 Dirty Martini vs Martini: The Contrasting Duality Between Big Bang and BTS Public Image and Their Latest MVs Analysis

Authors: Patricia Portugal Marques de Carvalho Lourenco

Abstract:

Big Bang is like a dirty martini, embroiled in a stew of individual scandals that have rocked the group’s image and public perception: from G-Dragon’s and T.O.P.’s marijuana episodes in 2011 and 2016, respectively, to Daesung’s building hosting illicit entertainment activities in 2018, to the Burning Sun scandal that led to the Titanic-like sinking of Big Bang’s youngest member, Seungri, in 2019, and the migration of positive sentiment to the antithetical side. BTS, on the other hand, are like a martini: clear and clean, attracting as many crowds to their performances and online content as the Pope attracts believers to Sunday Mass in the Vatican, as exemplified by their latest MVs. Big Bang’s 2022 Still Life achieved 16.4 million views on YouTube in 24 hours, whilst BTS’s Permission to Dance achieved 68.5 million in the same period of time. The difference is significant when the groups’ overall award wins are added: a total of 117 in contrast to 460. Both groups are uniquely talented and exceptional performers who have contributed greatly to the dissemination of Korean pop music on a global scale in their own inimitable ways. Both are exceptional in their own right, and while the artists cannot, ought not, and should not be compared, for the grave injustice made in comparing one individual planet with one solar system, a contrast is merited and hence done. The reality, nonetheless, is about disengagement from a group that lives life humanly, learning and evolving with each challenge and mistake without a clean, perfect tag attached to it, demonstrating not only an inability to dissociate the person from the artist and the music but also an inability to understand the difference between a private and a public life.

Keywords: K-Pop, Big Bang, BTS, music, public image, entertainment, Korean entertainment

Procedia PDF Downloads 101
25963 A Particle Image Velocimetric (PIV) Experiment on Simplified Bottom Hole Flow Field

Authors: Heqian Zhao, Huaizhong Shi, Zhongwei Huang, Zhengliang Chen, Ziang Gu, Fei Gao

Abstract:

Hydraulic mechanics is critically important in the drilling process for oil and gas exploration, especially at the drill bit. The fluid flows through the nozzles on the bit and generates a water jet to remove cuttings at the bottom hole. In this paper, a simplified bottom hole model is established, and Particle Image Velocimetry (PIV) is used to capture the flow field of a single nozzle. Due to the confinement of the bottom and the wellbore, the potential core is shorter than that of a free water jet. The velocity magnitude attenuates rapidly once the fluid is within about 5 mm of the bottom. Besides, a vortex zone appears near the middle of the bottom, beside the water jet zone. A modified exponential function fits the centerline velocity well. On the one hand, the results of this paper can provide verification for numerical simulations of the bottom hole flow field; on the other hand, they provide an experimental basis for the hydraulic design of the drill bit.
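The exponential fit of the centerline velocity can be sketched with a simple log-linear least-squares fit. The data points below are invented for illustration, and the plain decay u(x) = u0·exp(−kx) stands in for the paper's (unspecified) modified exponential form:

```python
import numpy as np

# Hypothetical centerline data: distance from the nozzle x (mm) and axial velocity u (m/s).
x = np.array([5.0, 10, 15, 20, 25, 30, 35, 40])
u = np.array([9.1, 7.5, 6.2, 5.0, 4.1, 3.4, 2.8, 2.3])

# Fit u(x) = u0 * exp(-k x) by linear least squares on ln(u).
slope, intercept = np.polyfit(x, np.log(u), 1)
u0, k = np.exp(intercept), -slope
print(f"u0 = {u0:.2f} m/s, decay constant k = {k:.4f} 1/mm")
```

The fitted curve can then be compared against the PIV-measured centerline profile to check the quality of the exponential model.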

Keywords: oil and gas, hydraulic mechanics of drilling, PIV, bottom hole

Procedia PDF Downloads 216
25962 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning and management for regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. The proposed method can also correctly distinguish homogeneous image parcels.
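The two accuracy metrics reported above can be computed from a change-detection confusion matrix as follows (the counts below are invented for illustration):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement (overall accuracy)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement from the marginals
    return po, (po - pe) / (1 - pe)

# Illustrative changed / no-changed confusion matrix (counts are invented).
cm = [[90, 10],
      [15, 85]]
oa, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.3f}, kappa = {kappa:.3f}")
```

Kappa discounts the agreement expected by chance, which is why it moves more than overall accuracy when a classifier improves, as in the 3% versus 6% gains reported above.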

Keywords: coupled Markov random field (MRF), environment, object-based analysis, polarimetric SAR (PolSAR) images

Procedia PDF Downloads 219
25961 River Network Delineation from Sentinel 1 Synthetic Aperture Radar Data

Authors: Christopher B. Obida, George A. Blackburn, James D. Whyatt, Kirk T. Semple

Abstract:

In many regions of the world, especially in developing countries, river network data are outdated or completely absent, yet such information is critical for supporting important functions such as flood mitigation efforts, land use and transportation planning, and the management of water resources. In this study, a method was developed for delineating river networks using Sentinel 1 imagery. Unsupervised classification was applied to multi-temporal Sentinel 1 data to discriminate water bodies from other land covers, and the outputs were combined to generate a single persistent water bodies product. A thinning algorithm was then used to delineate river centre lines, which were converted into vector features and built into a topologically structured geometric network. The complex river system of the Niger Delta was used to compare the performance of the Sentinel-based method against alternative freely available water body products from the United States Geological Survey, the European Space Agency and OpenStreetMap, and against a river network derived from a Shuttle Radar Topography Mission Digital Elevation Model. Both raster-based and vector-based accuracy assessments found that the Sentinel-based river network products were superior to the comparator data sets by a substantial margin. The geometric river network that was constructed permitted a flow routing analysis, which is important for a variety of environmental management and planning applications. The extracted network will potentially be applied to model the dispersion of hydrocarbon pollutants in Ogoniland, a part of the Niger Delta. The approach developed in this study holds considerable potential for generating up-to-date, detailed river network data for the many countries where such data are deficient.
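The "combine multi-temporal outputs into one persistent water product" step can be sketched as a per-pixel majority rule over the per-date classifications. The tiny masks below are invented; real inputs would be classified Sentinel 1 scenes, and the exact combination rule used in the study is an assumption here:

```python
import numpy as np

# Stack of per-date binary water masks (1 = water) from unsupervised classification.
# Three invented 5x5 dates stand in for classified Sentinel 1 scenes.
dates = np.array([
    [[0,1,1,0,0],[0,1,1,0,0],[0,0,1,1,0],[0,0,1,1,0],[0,0,0,1,1]],
    [[0,1,1,0,0],[0,1,1,1,0],[0,0,1,1,0],[0,0,1,1,0],[0,0,0,1,1]],
    [[0,1,0,0,0],[0,1,1,0,0],[0,0,1,1,0],[0,0,1,1,0],[0,0,0,1,1]],
])

# A pixel counts as persistent water if classified water in at least 2 of 3 dates;
# this majority rule suppresses single-date speckle misclassifications.
persistent = (dates.sum(axis=0) >= 2).astype(np.uint8)
print(persistent)
```

The persistent mask would then be thinned to centre lines and vectorized into the geometric network described above.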

Keywords: Sentinel 1, image processing, river delineation, large scale mapping, data comparison, geometric network

Procedia PDF Downloads 141
25960 Typology of Gaming Tourists Based on the Perception of Destination Image

Authors: Mi Ju Choi

Abstract:

This study investigated the perception of gaming tourists toward Macau and developed a typology of gaming tourists. A total of 1,497 responses from tourists in Macau were collected through a convenience sampling method. The dimensions of multi-culture, convenience, economy, gaming, and unsafety were subsequently extracted as the factors of gaming tourists' perception of Macau. Cluster analysis was performed using the delineated factors. Four heterogeneous groups were generated, namely gaming lovers (n = 467, 31.2%), exotic lovers (n = 509, 34.0%), reasonable budget seekers (n = 269, 18.0%), and convenience seekers (n = 252, 16.8%). Further analysis was performed to investigate any differences in gaming behavior and tourist activities. The findings are expected to contribute to the efforts of destination marketing organizations (DMOs) in establishing effective business strategies, provide a profile of gaming tourists in certain market segments, and assist DMOs and casino managers in establishing more effective marketing strategies for target markets.
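The segmentation step can be illustrated with a plain k-means cluster analysis on respondents' factor scores. The scores below are invented, use only two of the five factors, and assume k-means was the clustering method (the abstract does not name one):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means: returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    cent = X[rng.choice(len(X), k, replace=False)]   # initialize from data points
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - cent[None, :, :], axis=2)
        lab = d.argmin(axis=1)                       # assign each point to nearest centroid
        cent = np.array([X[lab == j].mean(axis=0) if (lab == j).any() else cent[j]
                         for j in range(k)])         # recompute centroids
    return lab, cent

# Invented factor scores [gaming, multi-culture] for 6 respondents: two obvious groups.
X = np.array([[2.1, 0.2], [1.9, 0.1], [2.0, 0.3],
              [0.1, 1.8], [0.2, 2.1], [0.0, 1.9]])
labels, centroids = kmeans(X, k=2)
print(labels)
```

With the real five-factor scores and k = 4, the resulting clusters would correspond to segments such as the gaming lovers and convenience seekers named above.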

Keywords: destination image, gaming tourists, Macau, segmentation

Procedia PDF Downloads 303
25959 Reproduction of New Media Art Village around NTUT: Heterotopia of Visual Culture Art Education

Authors: Yu Cheng-Yu

Abstract:

‘Heterotopia’, ‘visual culture art education’ and ‘new media’ seem at first to be three unrelated subjects; in fact, there are synchronicity and intertextuality among them. Through visual culture teaching strategies in school, art education gives students the ability to reflect on popular culture images, and we should get involved in the community to construct a learning environment that conveys visual culture art. This thesis attempts to probe the heterogeneity of space and value through Michel Foucault, and to research a sustainable development strategy for the heterogeneity of the ‘New Media Art Village’ through Jean Baudrillard, Marshall McLuhan's media culture theory and social construction ideology. It is possible to find a new media group that can convey visual culture art education around the National Taipei University of Technology, in a commercial district that combines intelligent technology, fashion, media, entertainment, art education, and marketing networks. With the engagement of big data and digital media, the imagination and innovation of the ‘New Media Art Village’ can become ‘implementable’: a new-media heterotopia of inter-subjectivity. Visual culture art education will also bring aesthetics into the community through the New Media Art Village.

Keywords: social construction, heterogeneity, new media, big data, visual culture art education

Procedia PDF Downloads 249
25958 Landscape Classification in North of Jordan by Integrated Approach of Remote Sensing and Geographic Information Systems

Authors: Taleb Odeh, Nizar Abu-Jaber, Nour Khries

Abstract:

The southern part of the Wadi Al Yarmouk catchment covers the north of Jordan. It lies between latitudes 32° 20’ and 32° 45’ N and longitudes 35° 42’ and 36° 23’ E and has an area of about 1426 km2. It has high-relief topography, with elevations varying between 50 and 1100 meters above sea level. The variations in topography produce different landform units, climatic zones, land covers and plant species, and as a result different landscape units exist in the region. Spatial planning is a major challenge in such a vital area for Jordan and could not be achieved without determining the landscape units. An integrated approach of remote sensing and geographic information systems (GIS) is an effective tool for investigating and mapping the landscape units of such a complicated area. Remote sensing has the capability to collect land surface data over large areas accurately and at different time periods, while GIS can store these land surface data, analyze them spatially and present them in the form of professional maps. We generated geo-land-surface data, including land cover, rock units, soil units, plant species and a digital elevation model, using an ASTER image and Google Earth, while the spatial analysis of the geo-data was done with ArcGIS 10.2 software. We found twenty-two different landscape units in the study area, which have to be considered in any spatial planning in order to avoid environmental problems.
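The overlay step that yields the landscape units can be sketched as finding the unique combinations of the coded input rasters on a common grid. The tiny coded layers below are invented, and "unique combination of layers = landscape unit" is an assumption about the overlay rule:

```python
import numpy as np

# Invented coded rasters on the same 2x3 grid: land cover, rock unit, soil unit.
landcover = np.array([[1, 1, 2], [2, 3, 3]])
rock      = np.array([[1, 2, 2], [2, 2, 1]])
soil      = np.array([[1, 1, 1], [2, 2, 2]])

# A landscape unit = one unique combination of the overlaid layer codes.
stack = np.stack([landcover, rock, soil], axis=-1).reshape(-1, 3)
units, unit_id = np.unique(stack, axis=0, return_inverse=True)
print(len(units))             # number of distinct landscape units
print(unit_id.reshape(2, 3))  # landscape-unit map
```

Applied to the real land cover, rock, soil, vegetation and elevation-class layers, the same overlay would yield the twenty-two units reported above.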

Keywords: landscape, spatial planning, GIS, spatial analysis, remote sensing

Procedia PDF Downloads 531
25957 Damage Analysis in Open Hole Composite Specimens by Acoustic Emission: Experimental Investigation

Authors: Youcef Faci, Ahmed Mebtouche, Badredine Maalem

Abstract:

In the present work, an experimental study is carried out using acoustic emission (AE) and digital image correlation (DIC) techniques to analyze the damage of open-hole woven carbon/epoxy composite under loading. Damage mechanisms were identified based on acoustic emission parameters such as amplitude, energy, and cumulative counts. The findings of the AE measurements were successfully corroborated by digital image correlation (DIC) measurements. The evolution of the bolt inclination angle during tensile tests was studied and analyzed. Consequently, the relationship between the bolt inclination angles during tensile tests and the failure modes of fastened joints of composite materials is determined. Moreover, there is an interaction between laminate pattern, laminate thickness, fastener size and type, surface strain concentrations, and out-of-plane displacement. Conclusions are supported by microscopic visualizations of the composite specimen.

Keywords: tensile test, damage, acoustic emission, digital image correlation

Procedia PDF Downloads 73
25956 Developing Three-Dimensional Digital Image Correlation Method to Detect the Crack Variation at the Joint of Weld Steel Plate

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

The purposes of a hydraulic gate are to store and drain water. It bears long-term hydraulic pressure and earthquake forces and is very important for reservoirs and waterpower plants. High-tensile-strength steel plate is used as the construction material of hydraulic gates. Cracks and rust, induced by material defects, poor construction, seismic excitation and underwater exposure, cause stress concentration and a high crack growth rate, affecting the safety and service of hydroelectric power plants; the mechanics of a gate with a crack therefore need to be probed. Stress distribution analysis is a very important and essential surveying technique for analyzing bi-material and singular-point problems. The finite difference infinitely small element method has been demonstrated to be suitable for analyzing the buckling phenomena of welding seams and steel plates with cracks; in particular, this method can easily analyze the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates constitute a three-dimensional system. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of steel plate with a crack at the weld joint. The digital image correlation (DIC) technique is a non-contact method for measuring the deformation of a test object. With the rapid development of digital cameras, the cost of this technique has been reduced. Moreover, the DIC method offers the advantages of wide practical applicability in indoor and field tests, without restriction on the size of the test object. Thus, the purpose of this research is to develop and apply this technique to monitor the crack variation of a welded steel hydraulic gate and its conformation under loading.
Images can be captured during the real-time monitoring process to analyze the strain change at each loading stage. The proposed three-dimensional digital image correlation method, developed in this study, is applied to analyze the post-buckling phenomenon and buckling tendency of a welded steel plate with a crack. The stress intensity of the three-dimensional analysis of different materials and enhanced materials in steel plate is then analyzed in this paper. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of a welded steel plate under different loading stages. In particular, the proposed DIC method can detect and identify the crack position and other flaws of the welded steel plate, phenomena that traditional test methods can hardly detect. Therefore, the proposed three-dimensional DIC method can be applied to observe the mechanical phenomena of composite materials subjected to loading and operation.
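The core of any DIC measurement is matching a speckle subset between the reference and deformed images. The sketch below does integer-pixel matching by zero-normalized cross-correlation on a synthetic shifted image; real 3D-DIC adds stereo calibration and sub-pixel refinement, which are not shown:

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref, deformed, top, left, size=5, search=3):
    """Find the integer-pixel displacement of one reference subset in the deformed image."""
    subset = ref[top:top + size, left:left + size]
    best, best_dv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + size <= deformed.shape[0] and x + size <= deformed.shape[1]:
                c = ncc(subset, deformed[y:y + size, x:x + size])
                if c > best:
                    best, best_dv = c, (dy, dx)
    return best_dv, best

# Synthetic test: a random speckle image shifted by (2, 1) pixels.
rng = np.random.default_rng(1)
ref = rng.random((20, 20))
deformed = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
print(match_subset(ref, deformed, top=6, left=6))  # displacement should be (2, 1)
```

Repeating this match over a grid of subsets yields the displacement (and hence strain) field that the monitoring described above relies on.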

Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate

Procedia PDF Downloads 520
25955 Ambivalence in Embracing Artificial Intelligence in the Units of a Public Hospital in South Africa

Authors: Sanele E. Nene L., Lia M. Hewitt

Abstract:

Background: Artificial intelligence (AI) has high value in healthcare; various applications have been developed for the efficiency of clinical operations, such as appointment/surgery scheduling, diagnostic image analysis, prognosis, prediction, and management of specific ailments. Purpose: The purpose of this study was to explore, describe, contrast, evaluate, and develop, as a conceptual framework, the various leadership strategies applied by public health Operational Managers (OMs) to embrace the benefits of AI, with the aim of improving the healthcare system in a public hospital. Design and Method: A qualitative, exploratory, descriptive and contextual research design was followed, with a descriptive phenomenological approach. Five phases were followed to conduct this study. Phenomenological individual interviews and focus groups were used to collect data, and a phenomenological thematic data analysis method was used. Findings and conclusion: Three themes surfaced as the OMs' experiences of AI: positive experiences related to AI, management and leadership processes in AI facilitation, and challenges related to AI.

Keywords: ambivalence, embracing, artificial intelligence, public hospital

Procedia PDF Downloads 80
25954 Land Use Change Detection Using Remote Sensing and GIS

Authors: Naser Ahmadi Sani, Karim Solaimani, Lida Razaghnia, Jalal Zandi

Abstract:

In recent decades, rapid and inappropriate changes in land use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land use is one of the tools for natural resource management and for assessing changes in ecosystems. The aim of this research is to study the land-use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. The quality of the images was therefore first evaluated. Various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. The images of each period were then classified into 9 classes using the supervised classification method and the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results showed that these changes are a warning for the future status of the Haraz basin: 27% of the area has changed, through the conversion of rangeland to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland and residential area.
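The maximum likelihood classification step can be sketched as assigning each pixel's band vector to the class whose multivariate Gaussian (estimated from the training sites) gives the highest log-likelihood. The class statistics and pixel values below are invented two-band examples:

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Gaussian maximum likelihood classification: assign each pixel vector to the
    class with the highest multivariate normal log-likelihood."""
    scores = []
    for mu, cov in zip(means, covs):
        inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
        d = pixels - mu
        # log-likelihood up to a constant: -0.5 * (log|C| + Mahalanobis^2)
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    return np.argmax(scores, axis=0)

# Two invented spectral classes (e.g. forest vs. bare land) in 2 bands.
means = [np.array([0.1, 0.4]), np.array([0.5, 0.2])]
covs = [np.eye(2) * 0.01, np.eye(2) * 0.01]
pixels = np.array([[0.12, 0.38], [0.48, 0.22], [0.09, 0.41]])
print(ml_classify(pixels, means, covs))  # → [0 1 0]
```

With 9 classes and the LANDSAT bands, the same rule produces the per-period classified maps whose differences are then extracted in GIS.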

Keywords: Haraz basin, change detection, land-use, satellite data

Procedia PDF Downloads 417
25953 Artificial Intelligence in Melanoma Prognosis: A Narrative Review

Authors: Shohreh Ghasemi

Abstract:

Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods.

Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine

Procedia PDF Downloads 84
25952 Array Type Miniaturized Ultrasonic Sensors for Detecting Sinkhole in the City

Authors: Won Young Choi, Kwan Kyu Park

Abstract:

Road depressions occurring in urban areas differ, in both cause and generation mechanism, from the sinkholes occurring in limestone areas. The main causes of sinkholes in city centers are the loss of soil due to damage to aging underground buried materials and groundwater discharge due to large underground excavation works. Sinkholes in urban areas are mostly detected using Ground Penetrating Radar (GPR). However, because GPR is based on electromagnetic waves, it is challenging to implement a compact system or to detect water-saturated ground. Although many ultrasonic underground detection studies have been conducted, near-surface detection (several tens of centimeters to several meters) has been developed for bulky systems using geophones as receivers. The goal of this work is to fabricate a miniaturized sinkhole detecting system based on low-cost ultrasonic transducers with a 40 kHz resonant frequency, high transmission pressure, and high receiving sensitivity. Motivated by biomedical ultrasonic imaging methods, we detect air layers below the ground, such as under asphalt, through the pulse-echo method. To improve image quality using multiple channels, a linear-array system is implemented, and images are acquired by the classical synthetic aperture imaging method. We present a successful feasibility test of a multi-channel sinkhole detector based on ultrasonic transducers. In this work, we present and analyze images acquired by single-channel pulse-echo imaging and by synthetic aperture imaging.
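Classical synthetic aperture imaging reduces to delay-and-sum focusing: for each image point, sum every channel's echo sample at that point's round-trip delay. The sketch below simulates monostatic pulse-echo A-scans for a point scatterer; all geometry, sampling, and propagation parameters are invented and the medium is simplified to air:

```python
import numpy as np

c = 343.0            # assumed speed of sound (air), m/s
fs = 400e3           # assumed sampling rate, Hz
elems = np.linspace(-0.06, 0.06, 8)   # 8 element x-positions along the array, m

# Simulate each element's A-scan: an impulse echo from a scatterer at (0, 0.30) m.
target = np.array([0.0, 0.30])
n = 2048
rf = np.zeros((len(elems), n))
for i, ex in enumerate(elems):
    r = np.hypot(target[0] - ex, target[1])      # one-way element-to-target distance
    rf[i, int(round(2 * r / c * fs))] = 1.0      # echo lands at the round-trip delay

def das_amplitude(point):
    """Delay-and-sum: coherently sum each channel's sample at the point's round-trip delay."""
    total = 0.0
    for i, ex in enumerate(elems):
        r = np.hypot(point[0] - ex, point[1])
        idx = int(round(2 * r / c * fs))
        if idx < n:
            total += rf[i, idx]
    return total

print(das_amplitude([0.0, 0.30]), das_amplitude([0.03, 0.20]))  # focus vs. off-focus
```

Evaluating das_amplitude over a grid of points produces the synthetic aperture image: amplitudes add coherently only at true scatterer locations such as the roof of an air cavity.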

Keywords: road depression, sinkhole, synthetic aperture imaging, ultrasonic transducer

Procedia PDF Downloads 146
25951 Use of Machine Learning Algorithms to Pediatric MR Images for Tumor Classification

Authors: I. Stathopoulos, V. Syrgiamiotis, E. Karavasilis, A. Ploussi, I. Nikas, C. Hatzigiorgi, K. Platoni, E. P. Efstathopoulos

Abstract:

Introduction: Brain and central nervous system (CNS) tumors form the second most common group of cancer in children, accounting for 30% of all childhood cancers. MRI is the key imaging technique used for the visualization and management of pediatric brain tumors. Initial characterization of tumors from MRI scans is usually performed via a radiologist’s visual assessment. However, different brain tumor types do not always demonstrate clear differences in visual appearance. Using only conventional MRI to provide a definite diagnosis could potentially lead to inaccurate results, so histopathological examination of biopsy samples is currently considered the gold standard for obtaining a definite diagnosis. Machine learning is defined as the study of computational algorithms that can use mathematical relationships and patterns, complex or not, from empirical and scientific data to make reliable decisions. Accordingly, machine learning techniques could provide effective and accurate ways to automate and speed up the analysis and diagnosis of medical images. Machine learning applications in radiology are, or could potentially be, useful in practice for medical image segmentation and registration, computer-aided detection and diagnosis systems for CT, MR or radiography images, and functional MR (fMRI) images for brain activity analysis and neurological disease diagnosis. Purpose: The objective of this study is to provide an automated tool which may assist in the imaging evaluation and classification of brain neoplasms in pediatric patients by determining the glioma type and grade and differentiating between different brain tissue types. Moreover, a future purpose is to present an alternative way of achieving quick and accurate diagnosis in order to save time and resources in the daily medical workflow.
Materials and Methods: A cohort of 80 pediatric patients with a diagnosis of posterior fossa tumor was used: 20 ependymomas, 20 astrocytomas, 20 medulloblastomas, and 20 healthy children. The MR sequences used for every patient were the following: axial T1-weighted (T1), axial T2-weighted (T2), Fluid-Attenuated Inversion Recovery (FLAIR), axial diffusion-weighted images (DWI), and axial contrast-enhanced T1-weighted (T1ce). From every sequence, only a principal slice was used, which was manually traced by two expert radiologists. Image acquisition was carried out on a GE HDxt 1.5-T scanner. The images were preprocessed in a number of steps, including noise reduction, bias-field correction, thresholding, coregistration of all sequences (T1, T2, T1ce, FLAIR, DWI), skull stripping, and histogram matching. A large number of candidate features were chosen, including age, tumor shape characteristics, image intensity characteristics, and texture features. After selecting the features that achieve the highest accuracy with the fewest variables, four machine learning classification algorithms were used: k-Nearest Neighbour, Support-Vector Machines, C4.5 Decision Tree, and a Convolutional Neural Network. The machine learning schemes and the image analysis are implemented in the WEKA and MatLab platforms, respectively. Results-Conclusions: The results and the classification accuracy for each type of glioma under the four different algorithms are still being processed.
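The classification step described above (feature vectors fed to classifiers such as k-Nearest Neighbour) can be illustrated with a minimal from-scratch sketch. The feature values and labels below are hypothetical placeholders, not taken from the study's WEKA/MatLab pipeline, and a real pipeline would use far more features per slice.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
        nearest = np.argsort(d)[:k]               # indices of the k closest
        labels, counts = np.unique(y_train[nearest], return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Hypothetical 2-D feature vectors (e.g., mean intensity, texture energy)
# for two tumor classes (0 and 1).
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([[0.0, 0.5], [5.0, 5.5]])))
```

In practice, one would cross-validate the choice of k and normalize each feature dimension first, since Euclidean distance is scale-sensitive.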

Keywords: image classification, machine learning algorithms, pediatric MRI, pediatric oncology

Procedia PDF Downloads 150
25950 Low Cost Webcam Camera and GNSS Integration for Updating Home Data Using AI Principles

Authors: Mohkammad Nur Cahyadi, Hepi Hapsari Handayani, Agus Budi Raharjo, Ronny Mardianto, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan

Abstract:

PDAM (the local water company) determines customer charges by considering the customer's building or house. Charge determination significantly affects both PDAM income and customer costs, because PDAM applies a subsidy policy for customers classified as small households. Periodic updates are needed so that pricing stays in line with the target, and updating customer building data requires a thorough customer survey in Surabaya. However, surveys have so far been carried out by deploying officers to visit each PDAM customer one by one, which demands considerable effort and cost. For this reason, this research offers a technology called mobile mapping, a mapping method that is more efficient in terms of time and cost. The device is also quite simple to use: it is installed on a car so that it can record the surrounding buildings while the car is moving. Mobile mapping technology generally uses lidar sensors combined with GNSS, but that technology is expensive. To overcome this problem, this research develops a low-cost mobile mapping technology using webcam camera sensors together with GNSS and IMU sensors. The camera used is a 3 MP webcam with 720p resolution and a diagonal field of view of 78°. The principle of the device is to integrate four webcam sensors with GNSS and an IMU so that the acquired photos are tagged with location data (latitude, longitude) and orientation data (roll, pitch, yaw). The device is also equipped with a tripod and a vacuum suction mount to fix it to the car's roof so that it does not fall off while driving. The output data from this technology are analyzed with artificial intelligence to reduce near-duplicate data (cosine similarity) and then to classify building types. Data reduction eliminates similar images while retaining the image that displays the complete house, so that it can be processed for the subsequent classification of buildings.
The AI method used is transfer learning with a pre-trained model, VGG-16. The similarity analysis showed that data reduction reached 50%. Georeferencing is then done using the Google Maps API to obtain address information corresponding to the coordinates in the data, after which a geographic join links the survey data with the customer records already held by PDAM Surya Sembada Surabaya.
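The cosine-similarity reduction step can be sketched as follows. In the actual system, the feature vectors would come from the VGG-16 model; here they are arbitrary placeholder vectors, and the 0.95 threshold is an assumed value.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def deduplicate(features, threshold=0.95):
    """Greedily keep an image's feature vector only if it is not too
    similar to any vector already kept; return indices of kept images."""
    kept = []
    for i, f in enumerate(features):
        if all(cosine_similarity(f, features[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy features: the second vector is nearly identical to the first.
feats = [np.array([1.0, 0.0]), np.array([1.0, 0.01]), np.array([0.0, 1.0])]
print(deduplicate(feats))
```

The greedy pass is quadratic in the number of images; for large surveys, an approximate nearest-neighbour index would typically replace the inner loop.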

Keywords: mobile mapping, GNSS, IMU, similarity, classification

Procedia PDF Downloads 84
25949 Best Timing for Capturing Satellite Thermal Images, Asphalt, and Concrete Objects

Authors: Toufic Abd El-Latif Sadek

Abstract:

The asphalt object represents asphalted areas such as roads, and the concrete object represents concrete areas such as concrete buildings. Efficient extraction of asphalt and concrete objects from a single satellite thermal image requires capturing at a specific time, avoiding the periods in which asphalt and concrete have brightness values close to each other and to those of other objects; only then can efficient extraction, and hence better analysis, be achieved. Seven sample objects were used in this study: asphalt, concrete, metal, rock, dry soil, vegetation, and water. It was found that the best timing for capturing satellite thermal images to extract both asphalt and concrete from one image, saving time and money, occurred at a specific time in different months. A table is deduced that shows the optimal timing for capturing satellite thermal images to extract these two objects effectively.
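The idea of choosing a capture time that maximizes the brightness separation between asphalt and concrete can be sketched with a toy selection function; the per-hour brightness values below are invented for illustration and are not the study's measurements.

```python
def best_capture_time(asphalt, concrete):
    """Given per-hour brightness readings (hour -> value) for asphalt and
    concrete, return the hour at which the two are most separable."""
    return max(asphalt, key=lambda h: abs(asphalt[h] - concrete[h]))

# Hypothetical normalized brightness values at three times of day.
asphalt = {6: 0.30, 12: 0.80, 18: 0.50}
concrete = {6: 0.32, 12: 0.50, 18: 0.45}
print(best_capture_time(asphalt, concrete))
```

A fuller version would also penalize hours at which either material's brightness approaches that of the other five sample objects.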

Keywords: asphalt, concrete, satellite thermal images, timing

Procedia PDF Downloads 323
25948 Parallel Version of Reinhard’s Color Transfer Algorithm

Authors: Abhishek Bhardwaj, Manish Kumar Bajpai

Abstract:

An image, with its content and its color scheme, is an effective medium for sharing and processing information. By changing its color scheme, users discover different views and perspectives of the same image. This phenomenon of color transfer is used by social media and other entertainment channels. The algorithm of Reinhard et al. was the first to solve the color transfer problem. In this paper, we make this algorithm more efficient by introducing domain parallelism across processors, and we comment on the factors that affect the speedup of the problem. Finally, by analyzing the experimental data, we propose a novel and efficient parallel Reinhard algorithm.
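The core of Reinhard-style color transfer is matching the per-channel mean and standard deviation of a source image to those of a target. The domain parallelism exploited here comes from the fact that this affine transform is independent per pixel, so the image can be split into row chunks and distributed across processors. A minimal single-process sketch (operating directly on the channels rather than in the lαβ space of the original algorithm) might look like:

```python
import numpy as np

def transfer_stats(chunk, src_mean, src_std, tgt_mean, tgt_std):
    """Shift a chunk of source pixels to the target's per-channel statistics."""
    return (chunk - src_mean) / src_std * tgt_std + tgt_mean

def reinhard_transfer(source, target, n_chunks=4):
    """Match source's per-channel mean/std to target's, chunk by chunk."""
    src_mean, src_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1))
    tgt_mean, tgt_std = target.mean(axis=(0, 1)), target.std(axis=(0, 1))
    chunks = np.array_split(source, n_chunks, axis=0)  # row-wise domain decomposition
    out = [transfer_stats(c, src_mean, src_std, tgt_mean, tgt_std) for c in chunks]
    return np.vstack(out)

rng = np.random.default_rng(0)
src, tgt = rng.random((8, 8, 3)), rng.random((8, 8, 3))
result = reinhard_transfer(src, tgt)
```

Because `transfer_stats` has no cross-chunk dependencies, the list comprehension can be replaced by, e.g., `multiprocessing.Pool.starmap` to map chunks over worker processes; only the global statistics pass is inherently sequential.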

Keywords: Reinhard et al.'s algorithm, color transfer, parallelism, speedup

Procedia PDF Downloads 616
25947 Implementation of an IoT Sensor Data Collection and Analysis Library

Authors: Jihyun Song, Kyeongjoo Kim, Minsoo Lee

Abstract:

Due to the development of information technology and wireless Internet technology, various data are being generated in many fields. These data are advantageous in that they provide real-time information to users; however, when the data are accumulated and analyzed, far more information can be extracted. In addition, the development and dissemination of boards such as the Arduino and Raspberry Pi have made it easy to test various sensors, and sensor data can be collected directly using database tools such as MySQL. Such directly collected data can be used in a variety of research and are useful as input for data mining. However, collecting data with these boards presents many difficulties, especially when the user is not a computer programmer or is using them for the first time. Even once data are collected, a lack of expert knowledge or experience may make data analysis and visualization difficult. In this paper, we construct a library for sensor data collection and analysis to overcome these problems.
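Among the analysis methods the keywords list (k-means, k-medoids, DBSCAN), the simplest to sketch is k-means. The following from-scratch version, with invented two-dimensional sensor readings, hints at what the library's clustering component might wrap; the iteration count and seed are arbitrary choices.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    center recomputation for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Hypothetical sensor readings forming two well-separated groups.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, centers = kmeans(X, 2)
```

A production library would add a convergence check and multiple restarts, since k-means is sensitive to initialization.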

Keywords: clustering, data mining, DBSCAN, k-means, k-medoids, sensor data

Procedia PDF Downloads 381
25946 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed on the basis of video data, with the aim of uncovering so-called "multimodal gestalts": patterns of linguistic and embodied conduct that recur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using videos) have so far depended on time- and resource-intensive manual transcription of each component of the video materials. Automating these tasks requires advanced programming skills, which are often beyond the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the current instance of VIAN-DH, we focus on gesture extraction (pointing gestures in particular), making use of existing models created for sign language and adapting them for this purpose. In order to view and search the data, VIAN-DH will provide a unified format, enable the import of the main existing formats of annotated video data and the export to other formats used in the field, and integrate different data source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable the parallel search of many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial, potentially revolutionary innovation to the field of IL (and can also extend to other fields that use video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented in combination with the qualitative approach, and it will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
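The parallel, multi-layer query idea, combining token-level and chronological search, ultimately reduces to finding annotations whose time intervals overlap across layers. A minimal sketch with hypothetical annotation records (the field names `word`, `type`, `start`, `end` are assumptions, not VIAN-DH's actual schema):

```python
def overlaps(a, b):
    """True if two (start, end) time intervals intersect."""
    return a[0] < b[1] and b[0] < a[1]

def cooccurrences(speech, gestures, word=None, gesture_type=None):
    """Return (word, gesture type) pairs whose annotations overlap in time,
    optionally filtered by a token-level and a gesture-level query."""
    hits = []
    for s in speech:
        if word is not None and s["word"] != word:
            continue
        for g in gestures:
            if gesture_type is not None and g["type"] != gesture_type:
                continue
            if overlaps((s["start"], s["end"]), (g["start"], g["end"])):
                hits.append((s["word"], g["type"]))
    return hits

speech = [{"word": "there", "start": 1.0, "end": 1.4},
          {"word": "yes", "start": 2.0, "end": 2.3}]
gestures = [{"type": "pointing", "start": 0.9, "end": 1.5}]
print(cooccurrences(speech, gestures))
```

For large corpora, the nested loop would be replaced by an interval index (e.g., sorted sweeps per layer) so that many annotation levels can be joined efficiently.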

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 111
25945 New Method to Increase Contrast of Electromicrograph of Rat Tissues Sections

Authors: Lise Paule Labéjof, Raíza Sales Pereira Bizerra, Galileu Barbosa Costa, Thaísa Barros dos Santos

Abstract:

Since the beginnings of microscopy, improving image quality has always been a concern of its users. For transmission electron microscopy (TEM) in particular, the problem is even more important due to the complexity of the sample preparation technique and the many variables that can affect the preservation of structures, the proper operation of the equipment, and thus the quality of the images obtained. Because animal tissues are transparent, a contrast agent must be applied in order to identify the elements of their ultrastructural morphology. Several methods of contrasting tissues for TEM imaging have already been developed; the most widely used are "in block" and "in situ" contrasting. This report presents an alternative technique in which the contrast agent is applied in vivo, i.e., before sampling. With this new method, electron micrographs of the tissue sections show better contrast than with the in situ approach and present no precipitation artefacts from the contrast agent. Another advantage is that only a small amount of contrast agent is needed to obtain a good result, which matters because most contrast agents are expensive and extremely toxic.

Keywords: image quality, microscopy research, staining technique, ultrathin section

Procedia PDF Downloads 436
25944 Development of a Mobile Image-Based Reminder Application to Support Tuberculosis Treatment in Africa

Authors: Haji Ali Haji, Hussein Suleman, Ulrike Rivett

Abstract:

This paper presents the design, development, and evaluation of an application prototype developed to support tuberculosis (TB) patients' treatment adherence. The system uses graphics and voice reminders, as opposed to text messaging, to encourage patients to follow their medication routine. To evaluate the effect of the prototype application, participants were given mobile phones on which the reminder system was installed. Thirty-eight people, including TB health workers and patients from Zanzibar, Tanzania, participated in the evaluation exercises. The results indicate that the participants found the mobile graphic-based application useful for supporting TB treatment. All participants understood and interpreted the intended meaning of every image correctly. The study findings revealed that the use of a mobile visual-based application may have potential benefits in supporting TB patients (both literate and illiterate) through their treatment processes.

Keywords: ICT4D, mobile technology, tuberculosis, visual-based reminder

Procedia PDF Downloads 432
25943 Government (Big) Data Ecosystem: Definition, Classification of Actors, and Their Roles

Authors: Syed Iftikhar Hussain Shah, Vasilis Peristeras, Ioannis Magnisalis

Abstract:

Organizations, including governments, generate (big) data that are high in volume, velocity, and veracity, and that come from a variety of sources. Public administrations are using (big) data, implementing base registries, and enforcing data sharing across the entire government to deliver (big)-data-related integrated services, provide insights to users, and support good governance. Government (big) data ecosystem actors represent distinct entities that provide data, consume data, manipulate data to offer paid services, and extend data services, such as data storage and hosting, to other actors. In this research work, we perform a systematic literature review. The key objectives of this paper are to propose a robust definition of the government (big) data ecosystem and a classification of its actors and their roles. We present a graphical view of the actors, their roles, and their relationships in the government (big) data ecosystem, and we discuss our research findings. We found few published research articles about the government (big) data ecosystem, including its definition and a classification of actors and their roles. We therefore borrowed ideas for the government (big) data ecosystem from related areas in the literature, including scientific research data, humanitarian data, open government data, and industry data.

Keywords: big data, big data ecosystem, classification of big data actors, big data actors roles, definition of government (big) data ecosystem, data-driven government, eGovernment, gaps in data ecosystems, government (big) data, public administration, systematic literature review

Procedia PDF Downloads 165
25942 Deep Learning-Based Liver 3D Slicer for Image-Guided Therapy: Segmentation and Needle Aspiration

Authors: Ahmedou Moulaye Idriss, Tfeil Yahya, Tamas Ungi, Gabor Fichtinger

Abstract:

Image-guided therapy (IGT) plays a crucial role in minimally invasive procedures for liver interventions. Accurate segmentation of the liver and precise needle placement are essential for successful interventions such as needle aspiration. In this study, we propose a deep learning-based liver 3D slicer designed to enhance segmentation accuracy and facilitate needle aspiration procedures. The developed 3D slicer leverages state-of-the-art convolutional neural networks (CNNs) for automatic liver segmentation in medical images. The CNN model is trained on a diverse dataset of liver images obtained from various imaging modalities, including computed tomography (CT) and magnetic resonance imaging (MRI). The trained model demonstrates robust performance in accurately delineating liver boundaries, even in cases with anatomical variations and pathological conditions. Furthermore, the 3D slicer integrates advanced image registration techniques to ensure accurate alignment of preoperative images with real-time interventional imaging. This alignment enhances the precision of needle placement during aspiration procedures, minimizing the risk of complications and improving overall intervention outcomes. To validate the efficacy of the proposed deep learning-based 3D slicer, a comprehensive evaluation is conducted using a dataset of clinical cases. Quantitative metrics, including the Dice similarity coefficient and Hausdorff distance, are employed to assess the accuracy of liver segmentation. Additionally, the performance of the 3D slicer in guiding needle aspiration procedures is evaluated through simulated and clinical interventions. Preliminary results demonstrate the effectiveness of the developed 3D slicer in achieving accurate liver segmentation and guiding needle aspiration procedures with high precision.
The integration of deep learning techniques into the IGT workflow shows great promise for enhancing the efficiency and safety of liver interventions, ultimately contributing to improved patient outcomes.
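The Dice similarity coefficient used above to evaluate segmentation accuracy is straightforward to compute from two binary masks; the sketch below assumes NumPy arrays in which nonzero voxels belong to the liver.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|); 1.0 for identical masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# Toy 2x2 masks: prediction covers one of the two ground-truth pixels.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(dice(pred, truth))
```

The same function applies unchanged to 3-D volumes; the Hausdorff distance, in contrast, requires surface point sets and is usually taken from an imaging library rather than written by hand.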

Keywords: deep learning, liver segmentation, 3D slicer, image guided therapy, needle aspiration

Procedia PDF Downloads 53
25941 Photomicrograph-Based Neuropathology Consultation in Tanzania; The Utility of Static-Image Neurotelepathology in Low- And Middle-Income Countries

Authors: Francis Zerd, Brian E. Moore, Atuganile E. Malango, Patrick W. Hosokawa, Kevin O. Lillehei, Laurence Lemery Mchome, D. Ryan Ormond

Abstract:

Introduction: Since neuropathologic diagnosis in the developing world is hampered by limitations in technical infrastructure, trained laboratory personnel, and subspecialty-trained pathologists, the use of telepathology for diagnostic support, second-opinion consultations, and ongoing training holds promise as a means of addressing these challenges. This research aims to assess the utility of static teleneuropathology in improving neuropathologic diagnoses in low- and middle-income countries. Methods: Consecutive neurosurgical biopsy and resection specimens obtained at Muhimbili National Hospital in Tanzania between July 1, 2018, and June 30, 2019, were selected for retrospective, blinded static-image neuropathologic review followed by on-site review by an expert neuropathologist. Results: A total of 75 neuropathologic cases were reviewed. The agreement between static-image and on-site glass-slide diagnoses was 71% with strict criteria and 88% with less stringent criteria. This represents an overall improvement in diagnostic accuracy from 36% by general pathologists to 71% by a neuropathologist using static telepathology (or from 76% to 88% with less stringent criteria). Conclusions: Telepathology offers a suitable means of providing diagnostic support, second-opinion consultations, and ongoing training to pathologists practicing in resource-limited countries. Moreover, static digital teleneuropathology is an uncomplicated, cost-effective, and reliable way to achieve these goals.

Keywords: neuropathology, resource-limited settings, static image, Tanzania, teleneuropathology

Procedia PDF Downloads 104
25940 Multimodality in Storefront Windows: The Impact of Verbo-Visual Design on Consumer Behavior

Authors: Angela Bargenda, Erhard Lick, Dhoha Trabelsi

Abstract:

Research in retailing has identified atmospherics as essential in enhancing store image, store patronage intentions, and the overall shopping experience in a retail environment. Within atmospherics, however, store window design, an essential component of external store atmospherics, remains vastly underrepresented in extant scholarship. This paper seeks to fill this gap by exploring the relevance of store window design as an atmospheric tool. In particular, empirical evidence of theme-based, theatrical storefront windows, which emphasize verbo-visual design elements, was found in Paris and New York. The purpose of this study was to identify to what extent such multimodal window designs of high-end department stores in metropolitan cities affect store entry decisions and attitudes towards the retailer's image. As theoretical constructs, the linguistic concept of multimodality and Mehrabian and Russell's model in environmental psychology were applied. To answer the research question, two studies were conducted. Study 1 used a case study approach to define three types of store window design based on different visual-verbal relations, each representing a different level of cognitive elaboration required for the decoding process. Study 2 consisted of an online survey of more than 300 respondents examining the influence of these three types of store window design on the consumer behavioral variables mentioned above. The results show that the higher the cognitive elaboration needed to decode the message of the store window, the lower the store entry propensity; in contrast, the higher the cognitive elaboration, the more favorable the perceived retailer image.
One important conclusion is that in order to increase consumers’ propensity to enter stores with theme-based theatrical store front windows, retailers need to limit the cognitive elaboration required to decode their verbo-visual window design.

Keywords: consumer behavior, multimodality, store atmospherics, store window design

Procedia PDF Downloads 205
25939 A Neural Network Classifier for Identifying Duplicate Image Entries in Real-Estate Databases

Authors: Sergey Ermolin, Olga Ermolin

Abstract:

A deep convolutional neural network with triplet loss is used to identify duplicate images in real-estate advertisements in the presence of image artifacts such as watermarking, cropping, hue/brightness adjustment, and others. The effects of batch normalization, spatial dropout, and various convergence methodologies on the resulting detection accuracy are discussed. For a comparative return-on-investment study (per industry request), end-to-end performance is benchmarked on both Nvidia Titan GPUs and Intel Xeon CPUs. A new real-estate dataset from the San Francisco Bay Area is used for this work. Sufficient duplicate-detection accuracy is achieved to supplement other database-grounded methods of duplicate removal. The implemented method is used in a proof-of-concept project in the real-estate industry.
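The triplet loss used here penalizes an anchor image's embedding for being closer to a non-duplicate (negative) than to a duplicate (positive) by less than a margin. A NumPy sketch of the batch loss follows; the margin value and the toy 2-D embeddings are assumptions, not the paper's settings.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Mean hinge loss over a batch of embedding triplets: pushes the
    anchor-negative squared distance to exceed the anchor-positive
    squared distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()

# Toy embeddings: the positive sits near the anchor, the negative far away.
anchor = np.array([[0.0, 0.0]])
pos = np.array([[0.0, 0.1]])
neg = np.array([[1.0, 0.0]])
print(triplet_loss(anchor, pos, neg))
```

When the triplet is already correctly ordered with margin to spare (as above), the loss is zero, so gradient effort concentrates on hard triplets; this is why triplet mining strategies matter when training such networks.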

Keywords: visual recognition, convolutional neural networks, triplet loss, spatial batch normalization with dropout, duplicate removal, advertisement technologies, performance benchmarking

Procedia PDF Downloads 341
25938 A Hybrid Watermarking Model Based on Frequency of Occurrence

Authors: Hamza A. A. Al-Sewadi, Adnan H. M. Al-Helali, Samaa A. K. Khamis

Abstract:

Ownership of multimedia such as text, image, audio, or video files can be proven by burying a watermark in them. This is achieved by introducing modifications into the files that are imperceptible to the human senses but easily recoverable by a computer program. These modifications may be in the time domain, the frequency domain, or both. This paper presents a watermarking procedure that mixes amplitude modulation with a frequency-transform histogram: a specific value is used to modulate the intensity component Y of the YIQ components of the carrier image. This scheme is referred to as the histogram embedding technique (HET). Comparison with other techniques, such as the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD), has shown enhanced efficiency in terms of ease and performance. The method has manifested a good degree of robustness against various environmental effects such as resizing, rotation, and different kinds of noise, and would prove a very useful technique for copyright protection and ownership judgment.
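The embedding modulates the Y (luminance) component of the YIQ representation. One convenient property is that adding an equal offset to R, G, and B raises Y by exactly that offset while leaving the I and Q chrominance components unchanged, since their RGB weights sum to zero. The toy sketch below exploits this; the strength value and mask are assumptions, and it omits the histogram-based selection that defines HET proper.

```python
import numpy as np

def rgb_to_y(rgb):
    """Luminance (Y of YIQ) from an RGB image with values in [0, 1]."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def embed_watermark(rgb, strength=0.02, mask=None):
    """Raise the Y component by `strength` wherever `mask` is set.
    Adding the same delta to R, G, and B shifts Y by delta (the Y
    weights sum to 1) while I and Q are unchanged (their weights sum to 0)."""
    if mask is None:
        mask = np.ones(rgb.shape[:2], dtype=bool)
    delta = np.where(mask, strength, 0.0)
    return np.clip(rgb + delta[..., None], 0.0, 1.0)

# Mid-gray test image: every pixel's luminance should rise by 0.02.
img = np.full((2, 2, 3), 0.5)
wm = embed_watermark(img, strength=0.02)
```

A real detector would recover the watermark by comparing the Y histograms of marked and unmarked regions, which is where the "frequency of occurrence" in the paper's title comes in.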

Keywords: authentication, copyright protection, information hiding, ownership, watermarking

Procedia PDF Downloads 566