Search results for: Images in Performances
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3340

3040 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves

Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira

Abstract:

Foliage diseases in plants can cause a reduction in both quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic as it may help monitoring large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize the late blight disease from the analysis of tomato digital images, collected directly from the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for the identification of healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images using a color representation that highlights the injuries on the plant. The resulting images contain only green, red, or black pixels, depending on whether each pixel came from a healthy portion of the leaf, an injured portion, or the background of the image, respectively. The system presented an accuracy of 97% in detecting and estimating the level of damage on the tomato leaves caused by late blight.
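
As a rough illustration of the two-network scheme, the hedged sketch below trains two small scikit-learn MLPs on synthetic pixel labels and combines their outputs into the green/red/black repainting described; the feature layout, toy labels, and network sizes are all assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_rgb = rng.random((2000, 3))                    # toy pixels, RGB in [0, 1]
y_healthy = (X_rgb[:, 1] > 0.5).astype(int)      # toy label: greenish pixels
y_injured = (X_rgb[:, 0] > 0.6).astype(int)      # toy label: reddish pixels

def pixel_features(rgb):
    """RGB plus rough HSL-like channels per pixel -> (n_pixels, 6)."""
    mx, mn = rgb.max(axis=1), rgb.min(axis=1)
    lightness = (mx + mn) / 2.0
    sat = (mx - mn) / (mx + 1e-9)                # crude saturation stand-in
    hue = np.arctan2(np.sqrt(3) * (rgb[:, 1] - rgb[:, 2]),
                     2 * rgb[:, 0] - rgb[:, 1] - rgb[:, 2] + 1e-9)
    return np.column_stack([rgb, hue, sat, lightness])

X = pixel_features(X_rgb)
healthy_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                            random_state=0).fit(X, y_healthy)
injured_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=800,
                            random_state=0).fit(X, y_injured)

# combine both outputs and repaint: green = healthy, red = injured, black = rest
h = healthy_net.predict(X).astype(bool)
i = injured_net.predict(X).astype(bool)
repainted = np.zeros((len(X), 3), np.uint8)
repainted[h & ~i] = (0, 255, 0)
repainted[i] = (255, 0, 0)
print(repainted[:5])
```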

Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary

Procedia PDF Downloads 309
3039 An Improved Sub-Nyquist Sampling Jamming Method for Deceiving Inverse Synthetic Aperture Radar

Authors: Yanli Qi, Ning Lv, Jing Li

Abstract:

The Sub-Nyquist sampling jamming method (SNSJ) is a well-known deception jamming method for inverse synthetic aperture radar (ISAR). However, SNSJ is relatively easy to counter, since the amplitude of the false-target images is weaker than that of the real-target image, the false-target images always lag behind the real-target image, and all targets are located in the same cross-range. In order to overcome the drawbacks mentioned above, a simple modulation based on SNSJ (M-SNSJ) is presented in this paper. The method first uses an amplitude modulation factor to make the amplitude of the false-target images consistent with the real-target image, then uses a down-range modulation factor and a cross-range modulation factor to make the false-target images move freely in down-range and cross-range, respectively, thus improving the capacity for deception. Finally, simulation results on the six available combinations of the three modulation factors are given to illustrate our conclusion.

Keywords: inverse synthetic aperture radar (ISAR), deceptive jamming, Sub-Nyquist sampling jamming method (SNSJ), modulation based on Sub-Nyquist sampling jamming method (M-SNSJ)

Procedia PDF Downloads 191
3038 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique

Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu

Abstract:

Medical imaging, and specifically medical image processing, has recently become one of the most dynamically developing areas of medical science. It has led to the emergence of new approaches to the prevention, diagnosis, and treatment of various diseases. In the process of diagnosing lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. Identification and demarcation of masses in terms of detecting cancer within lung tissue are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm have been presented through simulation. This has been performed on CT images using multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features carrying useful information are selected after feature extraction. Eventually, the output image of lung cancer is obtained with 96.3% accuracy and 87.25%. The purpose of feature extraction with the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems to recognise objects in lung CT scan images.
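
As a sketch of the multilevel-thresholding step, the snippet below splits a toy intensity slice into three classes with scikit-image's multi-Otsu implementation; the three-class setup, the synthetic data, and the "lung" class interpretation are assumptions, not the paper's pipeline.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

rng = np.random.default_rng(0)
# toy slice: three intensity populations standing in for air, lung, soft tissue
ct_slice = np.concatenate([rng.normal(-800, 60, 5000),
                           rng.normal(-100, 60, 5000),
                           rng.normal(40, 60, 6384)]).reshape(128, 128)

thresholds = threshold_multiotsu(ct_slice, classes=3)   # two cut points
regions = np.digitize(ct_slice, bins=thresholds)        # labels 0, 1, 2
lung_mask = regions == 1                                # assumed lung class
print(thresholds.round(1), lung_mask.mean().round(3))
```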

Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing

Procedia PDF Downloads 68
3037 An Efficient Fundamental Matrix Estimation for Moving Object Detection

Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung

Abstract:

In this paper, an improved method for estimating the fundamental matrix is proposed. The method is applied effectively to monocular-camera-based moving object detection. It consists of corner point detection, motion estimation of moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motions of moving objects are calculated with the pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In this method, we have improved the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier. Through simulations, we compare performance while varying the two threshold values.
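
The pipeline maps naturally onto OpenCV primitives. The sketch below follows the stated steps (Harris corners, pyramidal Lucas-Kanade, RANSAC fundamental matrix) and uses two thresholds, one for RANSAC inliers and one on epipolar distance; the threshold values and the helper name `detect_moving_points` are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def detect_moving_points(prev_gray, curr_gray, ransac_thresh=1.0, epi_thresh=2.0):
    # Harris corner detection
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                 minDistance=7, useHarrisDetector=True, k=0.04)
    # pyramidal Lucas-Kanade optical flow
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    pts0, pts1 = p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
    # threshold 1: RANSAC inlier threshold for the fundamental matrix
    F, inliers = cv2.findFundamentalMat(pts0, pts1, cv2.FM_RANSAC,
                                        ransacReprojThreshold=ransac_thresh)
    # threshold 2: points far from their epipolar lines belong to moving objects
    ones = np.ones((len(pts0), 1))
    x0, x1 = np.hstack([pts0, ones]), np.hstack([pts1, ones])
    lines = (F @ x0.T).T                          # epipolar lines in frame 2
    dist = np.abs(np.sum(lines * x1, axis=1)) / np.linalg.norm(lines[:, :2], axis=1)
    return pts1[dist > epi_thresh]                # candidate moving-object points
```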

Keywords: corner detection, optical flow, epipolar geometry, RANSAC

Procedia PDF Downloads 381
3036 Preliminary Evaluation of Maximum Intensity Projection SPECT Imaging for Whole Body Tc-99m Hydroxymethylene Diphosphonate Bone Scanning

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Kyoko Saito

Abstract:

Bone scintigraphy is widely used as a screening tool for bone metastases. However, the 180 to 240 minutes (min) of waiting time after the intravenous (i.v.) injection of the tracer is both long and tiresome. To solve this shortcoming, a bone scan with a shorter waiting time is needed. In this study, we applied Maximum Intensity Projection (MIP) and triple energy window (TEW) scatter correction to a whole body bone SPECT (Merged SPECT) and investigated shortening the waiting time. Methods: In a preliminary phantom study, hot gels of 99mTc-HMDP were inserted into sets of rods with diameters ranging from 4 to 19 mm. Each rod set covered a sector of a cylindrical phantom. The activity concentration of all rods was 2.5 times that of the background in the cylindrical body of the phantom. In the human study, SPECT images were obtained from chest to abdomen at 30 to 180 min after 99mTc-hydroxymethylene diphosphonate (HMDP) injection in healthy volunteers. For both studies, MIP images were reconstructed. Planar whole body images of the patients were also obtained; these were acquired at 200 min. The image quality of the SPECT and the planar images was compared. Additionally, 36 patients with breast cancer were scanned in the same way. The detectability of uptake regions (metastases) was compared visually. Results: In the phantom study, a 4 mm hot gel was difficult to depict with conventional SPECT, but MIP images showed it clearly. For both the healthy volunteers and the clinical patients, the accumulation of 99mTc-HMDP in the SPECT was good as early as 90 min. All findings of both image sets were in agreement. Conclusion: In phantoms, MIP images with TEW scatter correction could detect all rods down to a diameter of 4 mm. In patients, MIP reconstruction with TEW scatter correction could improve the detectability of hot lesions. In addition, the time between injection and imaging could be shortened from that conventionally used for whole body scans.
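
Computationally, a maximum intensity projection just keeps the largest voxel value along each projection ray, as the minimal numpy sketch below shows on a toy volume (the axis conventions are an assumption).

```python
import numpy as np

volume = np.random.rand(64, 128, 128)    # toy reconstructed SPECT volume (z, y, x)
mip_coronal = volume.max(axis=1)         # project along y: one max per ray
mip_sagittal = volume.max(axis=2)        # project along x
print(mip_coronal.shape, mip_sagittal.shape)   # (64, 128) (64, 128)
```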

Keywords: merged SPECT, MIP, TEW scatter correction, 99mTc-HMDP

Procedia PDF Downloads 394
3035 Study on Energy Performance Comparison of Information Centric Network Based on Difference of Network Architecture

Authors: Takumi Shindo, Koji Okamura

Abstract:

The first generation of the wide area network was the circuit-centric network, where finding the optimal circuit assignment was the most important issue for achieving the best performance. This architecture succeeded for line-based telephone systems. The second generation was the host-centric network, and the Internet, built on this architecture, has succeeded worldwide and become a new social infrastructure. Currently, network architecture is moving toward being based on the location of information. This future network is called the information-centric network (ICN). ICN has been researched by many projects, and different architectures for implementing it have been proposed. The goal of this study is to compare the performances of those ICN architectures. In this paper, the authors propose a general ICN model that can represent two typical ICN architectures, and compare their communication performances using request routing. Finally, simulation results are shown. We also assume that this network architecture should be adapted to energy on-demand routing.

Keywords: ICN, information centric network, CCN, energy

Procedia PDF Downloads 313
3034 Evaluating the Effectiveness of Electronic Response Systems in Technology-Oriented Classes

Authors: Ahmad Salman

Abstract:

Electronic Response Systems such as Kahoot, Poll Everywhere, and Google Classroom are gaining a lot of popularity for surveying audiences in events, meetings, and classrooms. This is mainly because of the ease of use and convenience these tools bring, since they provide mobile applications with a simple user interface. In this paper, we present a case study on the effectiveness of using Electronic Response Systems on student participation and learning experience in a classroom. We use a polling application for class exercises in two different technology-oriented classes. We evaluate the effectiveness of the polling applications through statistical analysis of the students' performance in these two classes and compare it to the performance of students who took the same classes without using a polling application for class participation. Our results show an average increase of 11% in the performance of the students who used the Electronic Response System compared to those who did not.

Keywords: interactive learning, classroom technology, electronic response systems, polling applications, learning evaluation

Procedia PDF Downloads 108
3033 Design and Implementation of Partial Denoising Boundary Image Matching Using Indexing Techniques

Authors: Bum-Soo Kim, Jin-Uk Kim

Abstract:

In this paper, we design and implement a partial denoising boundary image matching system using indexing techniques. Converting boundary images to time-series makes it feasible to perform fast searches using indexes, even on a very large image database. Using this conversion method, we develop a client-server system based on the previous partial denoising research in a GUI (graphical user interface) environment. The client first converts a query image given by a user to a time-series and sends denoising parameters and the tolerance, along with this time-series, to the server. The server identifies similar images from the index by evaluating a range query, which is constructed using the inputs given by the client, and sends the resulting images to the client. Experimental results show that our system provides highly intuitive and accurate matching results.
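
The abstract does not spell out its boundary-to-time-series conversion; a common choice is the centroid contour distance signal, sketched below with OpenCV as an assumed stand-in (the fixed resampling length is also an assumption).

```python
import cv2
import numpy as np

def boundary_to_series(binary_img, n_samples=256):
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)
    centroid = contour.mean(axis=0)
    dist = np.linalg.norm(contour - centroid, axis=1)   # distance signal
    # resample to a fixed length so all images index into the same space
    idx = np.linspace(0, len(dist) - 1, n_samples)
    return np.interp(idx, np.arange(len(dist)), dist)

img = np.zeros((100, 100), np.uint8)
cv2.circle(img, (50, 50), 30, 255, -1)    # toy query shape
series = boundary_to_series(img)
print(series.shape)                        # (256,)
```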

Keywords: boundary image matching, indexing, partial denoising, time-series matching

Procedia PDF Downloads 118
3032 Assisting Dating of Greek Papyri Images with Deep Learning

Authors: Asimina Paparrigopoulou, John Pavlopoulos, Maria Konstantinidou

Abstract:

Dating papyri accurately is crucial not only to editing their texts but also for our understanding of palaeography and the history of writing, ancient scholarship, material culture, networks in antiquity, etc. Most ancient manuscripts offer little evidence regarding the time of their production, forcing papyrologists to date them on palaeographical grounds, a method often criticized for its subjectivity. By experimenting with data obtained from the Collaborative Database of Dateable Greek Bookhands and the PapPal online collections of objectively dated Greek papyri, this study shows that deep learning dating models, pre-trained on generic images, can achieve accurate chronological estimates for a test subset (67.97% accuracy for book hands and 55.25% for documents). To compare the estimates of these models with those of humans, experts were asked to complete a questionnaire with samples of literary and documentary hands that had to be sorted chronologically by century. The same samples were dated by the models in question. The results are presented and analysed.

Keywords: image classification, papyri images, dating

Procedia PDF Downloads 60
3031 FMR1 Gene Carrier Screening for Premature Ovarian Insufficiency in Females: An Indian Scenario

Authors: Sarita Agarwal, Deepika Delsa Dean

Abstract:

Like the task of transferring photo images to artistic images, image-to-image translation aims to translate data into imitated data belonging to the target domain. Neural Style Transfer and CycleGAN are two well-known deep learning architectures used for photo-to-art image transfer. However, studies involving these two models concentrate on one-to-one domain translation, not one-to-many domain translation. Our study investigates deep learning architectures that can be controlled to yield multiple artistic style translations only by adding a conditional vector. We have expanded CycleGAN and constructed a Conditional CycleGAN for translation across five categories. Our study found that the architecture inserting the conditional vector into the middle layer of the generator could output multiple artistic images.

Keywords: genetic counseling, FMR1 gene, fragile x-associated primary ovarian insufficiency, premutation

Procedia PDF Downloads 101
3030 GPU Based High Speed Error Protection for Watermarked Medical Image Transmission

Authors: Md Shohidul Islam, Jongmyon Kim, Ui-pil Chong

Abstract:

Medical images are an integral part of e-health care and e-diagnosis systems. Medical image watermarking is widely used to protect patients’ information from malicious alteration and manipulation. The watermarked medical images are transmitted over the internet among patients and primary and referring physicians. The images are highly prone to corruption in the wireless transmission medium due to various noises, deflections, and refractions. Distortion in the received images leads to faulty watermark detection and inappropriate disease diagnosis. To address the issue, this paper incorporates an error correction code (ECC), the (8, 4) Hamming code, into an existing watermarking system. In addition, we implement the computationally complex ECC on a graphics processing unit (GPU) to accelerate it and support real-time requirements. Experimental results show that the GPU achieves considerable speedup over the sequential CPU implementation, while maintaining 100% ECC efficiency.
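
For reference, an (8, 4) extended Hamming code protects four data bits with three parity bits plus an overall parity bit, correcting any single-bit error and detecting double errors. The plain-Python sketch below uses one common bit layout and is an illustration, not the paper's GPU kernel.

```python
def encode84(d):  # d: four data bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]   # Hamming(7,4), positions 1..7
    p0 = sum(word) % 2                    # overall parity bit
    return word + [p0]

def decode84(w):
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]        # checks positions 1, 3, 5, 7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]        # checks positions 2, 3, 6, 7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]        # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3            # error position, 0 if none
    overall = sum(w) % 2
    if pos and overall:                   # single error: correct it
        w = list(w)
        w[pos - 1] ^= 1
    elif pos and not overall:             # nonzero syndrome, even parity
        raise ValueError("double-bit error detected")
    return [w[2], w[4], w[5], w[6]]       # recover d1..d4

data = [1, 0, 1, 1]
cw = encode84(data)
cw[5] ^= 1                                # inject a single-bit error
assert decode84(cw) == data
```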

Keywords: medical image watermarking, e-health system, error correction, Hamming code, GPU

Procedia PDF Downloads 268
3029 Identifying the True Extent of Glioblastoma Based on Preoperative FLAIR Images

Authors: B. Shukir, L. Szivos, D. Kis, P. Barzo

Abstract:

Glioblastoma is the most malignant brain tumor. In general, survival varies between 14 and 18 months. Glioblastoma consists of a solid part and an infiltrative part. The standard therapeutic management of glioblastoma is maximum safe resection followed by chemo-radiotherapy. It is hypothesized that the peritumoral hyperintense region in fluid attenuated inversion recovery (FLAIR) images includes both vasogenic edema and infiltrated tumor cells. In our study, we aimed to define the sensitivity and specificity of the preoperative hyperintense FLAIR region, to examine how well it can define the true extent of glioblastoma. Sixteen glioblastoma patients were included in this study. The hyperintense FLAIR region was delineated preoperatively as the tumor mask. The infiltrative part of the glioblastoma was considered to be the regions where the tumor recurred on follow-up MRI; recurrence on the CE-T1 images was marked as the recurrence mask. According to the AAL3 and JHU white-matter-labels atlases, the brain was divided into cortical and subcortical regions, respectively. For calculating specificity and sensitivity, the FLAIR and recurrence masks were overlapped, counting how many regions were affected by both. The average sensitivity and specificity were 83% and 85%, respectively. Individually, sensitivity varied between 31% and 100%, and specificity between 58% and 100%. These results suggest that although FLAIR is an effective radiologic imaging tool, its prognostic value remains controversial, and probabilistic tractography remains the more reliable available method for identifying the true extent of glioblastoma.

Keywords: brain tumors, glioblastoma, MRI, FLAIR

Procedia PDF Downloads 26
3028 Enhanced Image Representation for Deep Belief Network Classification of Hyperspectral Images

Authors: Khitem Amiri, Mohamed Farah

Abstract:

Image classification is a challenging task and is gaining a lot of interest, since it helps us to understand the content of images. Recently, Deep Learning (DL) based methods have given very interesting results on several benchmarks. For hyperspectral images (HSI), the application of DL techniques is still challenging due to the scarcity of labeled data and to the curse of dimensionality. Among other approaches, Deep Belief Network (DBN) based approaches have given fair classification accuracy. In this paper, we address the curse of dimensionality by reducing the number of bands and replacing the HSI channels with channels representing radiometric indices. Therefore, instead of using all the HSI bands, we compute radiometric indices such as the NDVI (Normalized Difference Vegetation Index) and the NDWI (Normalized Difference Water Index), and we use the combination of these indices as input for the Deep Belief Network (DBN) based classification model. Thus, we keep almost all the pertinent spectral information while considerably reducing the size of the image. To test our image representation, we applied our method to several HSI datasets, including the Indian Pines and Jasper Ridge datasets, and it gave results comparable to state-of-the-art methods while considerably reducing training and testing time.
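
The band-reduction idea is simple to sketch: replace the hundreds of HSI channels with a small stack of index images. In the numpy sketch below, the band positions and the particular index set are assumptions that depend on the sensor's wavelength table.

```python
import numpy as np

def index_stack(cube, red=29, nir=51, green=19, swir=120):
    """cube: (rows, cols, bands) reflectance array -> (rows, cols, n_indices)."""
    r, n, g, s = (cube[..., b].astype(float) for b in (red, nir, green, swir))
    eps = 1e-9
    ndvi = (n - r) / (n + r + eps)    # vegetation
    ndwi = (g - n) / (g + n + eps)    # water
    ndmi = (n - s) / (n + s + eps)    # moisture
    return np.stack([ndvi, ndwi, ndmi], axis=-1)

cube = np.random.rand(145, 145, 200)  # toy stand-in for an Indian Pines cube
features = index_stack(cube)
print(features.shape)                 # (145, 145, 3) -> compact DBN input
```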

Keywords: hyperspectral images, deep belief network, radiometric indices, image classification

Procedia PDF Downloads 248
3027 Comparison of Virtual Non-Contrast to True Non-Contrast Images Using Dual-Layer Spectral Computed Tomography

Authors: O’Day Luke

Abstract:

Purpose: To validate virtual non-contrast reconstructions generated from dual-layer spectral computed tomography (DL-CT) data as an alternative to the acquisition of a dedicated true non-contrast dataset during multiphase contrast studies. Material and methods: Thirty-three patients underwent a routine multiphase clinical CT examination, using dual-layer spectral CT, from March to August 2021. True non-contrast (TNC) and virtual non-contrast (VNC) datasets, generated from both portal venous and arterial phase imaging, were evaluated. For every patient in both true and virtual non-contrast datasets, a region-of-interest (ROI) was defined in the aorta, liver, fluid (i.e., gallbladder, urinary bladder), kidney, muscle, fat, and spongious bone, resulting in 693 ROIs. Differences in attenuation between VNC and TNC images were compared, both separately and combined. Consistency between VNC reconstructions obtained from the arterial and portal venous phases was evaluated. Results: Comparison of CT density (HU) on the VNC and TNC images showed a high correlation. The mean difference between TNC and VNC images (excluding bone results) was 5.5 ± 9.1 HU, and >90% of all comparisons showed a difference of less than 15 HU. For all tissues but spongious bone, the mean absolute difference between TNC and VNC images was below 10 HU. VNC images derived from the arterial and the portal venous phase showed a good correlation in most tissue types. However, the aortic attenuation was somewhat dependent on which dataset was used for reconstruction. Bone evaluation with VNC datasets continues to be a problem, as spectral CT algorithms are currently poor at differentiating bone and iodine. Conclusion: Given the increasing availability of DL-CT and the proven accuracy of virtual non-contrast processing, VNC is a promising tool for generating additional data during routine contrast-enhanced studies. This study shows the utility of virtual non-contrast scans as an alternative to true non-contrast studies during multiphase CT, with potential for dose reduction without loss of diagnostic information.

Keywords: dual-layer spectral computed tomography, virtual non-contrast, true non-contrast, clinical comparison

Procedia PDF Downloads 121
3026 Optimal and Best Timing for Capturing Satellite Thermal Images of Concrete Object

Authors: Toufic Abd El-Latif Sadek

Abstract:

The concrete object represents concrete areas, such as buildings. The best, easiest, and most efficient extraction of the concrete object from satellite thermal images occurs at specific times during the days of the year, by avoiding the time gaps in which different objects give close or identical brightness. The aim of the study is thus to achieve the best original data, leading to better extraction of the concrete object and better analysis. The study was done using seven sample objects (asphalt, concrete, metal, rock, dry soil, vegetation, and water) located at one place and carefully investigated so that all the objects yield homogeneous acquired data at the same time and under the same weather conditions. The samples of the objects were on the roof of a building at a position taken by global positioning system (GPS), whose geographical coordinates are: latitude 33 degrees 37 minutes, longitude 35 degrees 28 minutes, height 600 m. It has been found that the first choice and the best time in February is at 2:00 pm, in March at 4:00 pm, in April and May at 12:00 pm, in August at 5:00 pm, and in October at 11:00 am. The best time in June and November is at 2:00 pm.

Keywords: best timing, concrete areas, optimal, satellite thermal images

Procedia PDF Downloads 330
3025 Performance Evaluation of Various Segmentation Techniques on MRI of Brain Tissue

Authors: U.V. Suryawanshi, S.S. Chowhan, U.V. Kulkarni

Abstract:

The accuracy of segmentation methods is of great importance in brain image analysis. Tissue classification in magnetic resonance brain images (MRI) is an important issue in the analysis of several brain dementias. This paper portrays the performance of segmentation techniques used on brain MRI. A large variety of algorithms for segmentation of brain MRI have been developed. The objective of this paper is to perform a segmentation process on MR images of the human brain, using Fuzzy c-means (FCM), Kernel-based Fuzzy c-means clustering (KFCM), Spatial Fuzzy c-means (SFCM), and Improved Fuzzy c-means (IFCM). The review covers imaging modalities, MRI, noise reduction methods, and segmentation approaches. All methods are applied to MRI brain images degraded by salt-and-pepper noise; the results demonstrate that the IFCM algorithm is more robust to noise than the standard FCM algorithm. We conclude with a discussion of the trend of future research in brain segmentation and changing norms in IFCM for better results.
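
For reference, the baseline FCM that the compared variants (KFCM, SFCM, IFCM) build on alternates the standard membership and center updates. The minimal numpy version below is written from those textbook equations, not from the paper's code.

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-10
        U = 1.0 / (d ** (2 / (m - 1)))           # inverse-distance weights
        U /= U.sum(axis=1, keepdims=True)        # normalize rows to 1
    return U, centers

# toy "MRI intensities": three tissue classes
X = np.concatenate([np.random.normal(mu, 5, 300) for mu in (40, 100, 160)])
U, centers = fcm(X.reshape(-1, 1))
labels = U.argmax(axis=1)                        # defuzzified segmentation
print(np.sort(centers.ravel()))                  # roughly [40, 100, 160]
```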

Keywords: image segmentation, preprocessing, MRI, FCM, KFCM, SFCM, IFCM

Procedia PDF Downloads 302
3024 Segmentation of Liver Using Random Forest Classifier

Authors: Gajendra Kumar Mourya, Dinesh Bhatia, Akash Handique, Sunita Warjri, Syed Achaab Amir

Abstract:

Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means for abdominal organ investigation and have been widely studied in recent years. Diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To deeply study and diagnose the liver, segmentation of the liver is done to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a Random Forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparing with various other techniques, it was found that the Random Forest classifier provides better segmentation results with respect to accuracy and speed. We validated our results using various techniques, and they show above 89% accuracy in all cases.
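
A minimal sketch of per-pixel random-forest segmentation with scikit-learn follows; the features (intensity, local mean, coordinates) and the toy "liver" image are assumptions, and a real system would train and evaluate on separate scans.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
ct = rng.normal(0, 20, (64, 64))
ct[16:48, 16:48] += 60                                  # toy "liver" region
truth = np.zeros((64, 64), bool)
truth[16:48, 16:48] = True

yy, xx = np.mgrid[0:64, 0:64]
feats = np.column_stack([ct.ravel(),                    # raw intensity
                         uniform_filter(ct, 5).ravel(), # local mean context
                         yy.ravel(), xx.ravel()])       # position prior
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(feats, truth.ravel())                           # train on labeled scan
mask = clf.predict(feats).reshape(ct.shape)             # per-pixel liver mask
print("pixel accuracy:", (mask == truth).mean())        # on the training scan
```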

Keywords: CT images, image validation, random forest, segmentation

Procedia PDF Downloads 287
3023 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model

Authors: Gholba Niranjan Dilip, Anil Kumar

Abstract:

Accurate and reliable mapping of oil palm plantations and a census of individual palm trees are huge challenges. This study addresses these challenges by developing an optimized solution that implements deep learning techniques on remote sensing data. The oil palm is a very important tropical crop. To improve its productivity and land management, it is imperative to have an accurate census over large areas. Since manual census is costly and prone to approximations, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat, and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected. A customized set of training data and ground-truth data was created for this study from Cartosat-2 images. The pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 Convolutional Neural Network (CNN) model from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model is able to generate bounding boxes for each oil palm and also count the palms with good accuracy on the panchromatic images. The detection yielded an F-score of 83.16% on seven different images. The detections are buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provided the area under the plantations and also gave maps of their locations, thereby completing the automated census with fairly high accuracy (≈100%). The trained CNN was found competent enough to detect oil palm crowns from images obtained from multiple satellite sensors and of varying temporal vintage. It helped to estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite images can successfully be used to undertake a census of oil palm plantations using CNNs.
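
The buffer-and-dissolve step can be sketched with shapely: each detection box is grown by an assumed crown radius, and touching crowns are merged into plantation polygons whose areas complete the census. The coordinates and buffer distance below are invented for illustration.

```python
from shapely.geometry import box
from shapely.ops import unary_union

# detections: (xmin, ymin, xmax, ymax) boxes from the detector, in metres
detections = [(0, 0, 8, 8), (10, 0, 18, 8), (5, 10, 13, 18), (200, 200, 208, 208)]
crowns = [box(*d).buffer(6.0) for d in detections]   # grow each crown outward
plantations = unary_union(crowns)                    # dissolve touching crowns
n_polys = len(plantations.geoms) if plantations.geom_type == "MultiPolygon" else 1
print(len(detections), "palms ->", n_polys, "plantation polygons,",
      round(plantations.area, 1), "sq m total")
```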

Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector

Procedia PDF Downloads 142
3022 Tree Species Classification Using Effective Features of Polarimetric SAR and Hyperspectral Images

Authors: Milad Vahidi, Mahmod R. Sahebi, Mehrnoosh Omati, Reza Mohammadi

Abstract:

Forest management organizations need information to perform their work effectively, and remote sensing is an effective method of acquiring information about the Earth. Two remote sensing image datasets were used to classify forested regions. First, all extractable features from the hyperspectral and PolSAR images were extracted. The optical features were spectral indexes related to chemical and water contents, structural indexes, effective bands, and absorption features. The PolSAR features were the original data, target decomposition components, and SAR discriminator features. Second, particle swarm optimization (PSO) and a genetic algorithm (GA) were applied to select optimal features. Furthermore, the support vector machine (SVM) classifier was used to classify the image. The results showed that the combination of PSO and SVM had higher overall accuracy than the other cases, providing an overall accuracy of about 90.56%. The effective features were the spectral indexes, the bands in the shortwave infrared (SWIR) and visible ranges, and certain PolSAR features.
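
A hedged sketch of the PSO-wrapped SVM idea: a toy binary particle swarm searches feature subsets, scoring each by the cross-validated accuracy of an RBF SVM. The synthetic data and all swarm hyperparameters are assumptions, and the paper's exact PSO variant is not specified here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)   # stand-in for HSI+PolSAR features

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

n_particles, dim, w, c1, c2 = 15, X.shape[1], 0.7, 1.5, 1.5
pos = rng.random((n_particles, dim)) > 0.5          # binary feature masks
vel = rng.uniform(-1, 1, (n_particles, dim))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(20):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = (w * vel + c1 * r1 * (pbest.astype(float) - pos)
                   + c2 * r2 * (gbest.astype(float) - pos))
    pos = rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))  # sigmoid flip
    for i, p in enumerate(pos):
        f = fitness(p)
        if f > pbest_fit[i]:
            pbest[i], pbest_fit[i] = p, f
    gbest = pbest[pbest_fit.argmax()].copy()

print(gbest.sum(), "features selected, CV accuracy:", pbest_fit.max().round(3))
```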

Keywords: hyperspectral, PolSAR, feature selection, SVM

Procedia PDF Downloads 394
3021 Analysis of Land Use, Land Cover Changes in Damaturu, Nigeria: Using Satellite Images

Authors: Isa Muhammad Zumo, Musa Lawan

Abstract:

This study analyzes the land use/land cover changes in the Damaturu metropolis from 1986 to 2005. Landsat TM images of 1986, 1999, and 2005 were used. Built-up land, agricultural land, water bodies, and other land were created as themes within the ILWIS 3.4 software. The images were displayed in False Colour Composite (FCC) for better visualization and identification of the themes created. Training sample sets were collected based on ground truth data during the field checks. Statistical data were then extracted from the classified sample sets. The area in hectares for each theme was calculated for each year, and the results for each land use/land cover type were compared across the study years. From the results, it was found that built-up areas increased considerably, from 37.71 hectares in 1986 to 1062.72 hectares in 2005, an annual increase rate of approximately 0.34%. The results also reveal a decrease of 5829.66 hectares of other land (vacant land) from 1986 to 2005.

Keywords: land use, changes, analysis, environmental pollution

Procedia PDF Downloads 317
3020 Determining Earthquake Performances of Existing Reinforced Concrete Buildings by Using ANN

Authors: Musa H. Arslan, Murat Ceylan, Tayfun Koyuncu

Abstract:

In this study, an artificial intelligence-based (ANN-based) analytical method has been developed for analyzing the earthquake performance of reinforced concrete (RC) buildings. 66 RC buildings with four to ten storeys were subjected to performance analysis according to parameters comprising the existing material, loading, and geometrical characteristics of the buildings. The selected parameters are thought to be effective on the performance of RC buildings. In the performance analysis stage of the study, the level of performance these buildings could show in case of an earthquake was determined on the basis of the 4-grade performance levels specified in the Turkish Earthquake Code-2007 (TEC-2007). After obtaining the 4-grade performance level, 23 selected parameters of each building were matched with the performance level. In this stage, the ANN-based fast evaluation algorithm made an economic and rapid evaluation of the four to ten storey RC buildings. According to the study, the prediction accuracy of the ANN was found to be about 74%.
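
A hedged sketch of the described setup, with scikit-learn's MLP standing in for the paper's ANN: 23 building parameters map to one of four TEC-2007 performance grades. The synthetic data below replaces the 66 real buildings, so the toy score will sit near chance rather than the reported ~74%.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((66, 23))          # material/loading/geometry parameters (toy)
y = rng.integers(0, 4, 66)        # 4-grade performance level (toy labels)

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
scores = cross_val_score(model, X, y, cv=5)
print("CV accuracy: %.2f" % scores.mean())   # near chance on random labels
```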

Keywords: artificial intelligence, earthquake, performance, reinforced concrete

Procedia PDF Downloads 443
3019 Environmental Decision Making Model for Assessing On-Site Performances of Building Subcontractors

Authors: Buket Metin

Abstract:

Buildings cause a variety of loads on the environment due to activities performed at each stage of the building life cycle. Construction is the first stage that affects both the natural and built environments at different steps of the process, which can be defined as the transportation of materials within the construction site, the formation and preparation of materials on-site, and the application of materials to realize the building subsystems. All of these steps require the use of technology, which varies based on the facilities that contractors and subcontractors have. Hence, the environmental consequences of the construction process should be tackled by focusing on the construction technology options used in every step of the process. This paper presents an environmental decision-making model for assessing the on-site performance of subcontractors based on the construction technology options which they can supply. First, construction technologies, which comprise information, tools, and methods, are classified. Then, environmental performance criteria are set forth related to resource consumption, ecosystem quality, and human health issues. Finally, the model is developed based on the relationships between the construction technology components and the environmental performance criteria. The Fuzzy Analytic Hierarchy Process (FAHP) method is used for weighting the environmental performance criteria according to the environmental priorities of the decision-maker(s), while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method is used for ranking the on-site environmental performance of subcontractors using quantitative data related to the construction technology components. Thus, the model aims to provide insight to decision-maker(s) about the environmental consequences of the construction process and an opportunity to improve the overall environmental performance of construction sites.
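
For reference, the TOPSIS ranking step can be sketched in a few lines of numpy: normalize the decision matrix, weight it, and score each subcontractor by its closeness to the ideal solution. The decision matrix, criteria directions, and FAHP-style weights below are invented for illustration.

```python
import numpy as np

def topsis(M, weights, benefit):
    """M: alternatives x criteria; benefit[j] is True if larger is better."""
    N = M / np.linalg.norm(M, axis=0)            # vector-normalize columns
    V = N * weights                              # apply criteria weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness to ideal, in (0, 1)

# three subcontractors scored on resource use, ecosystem impact, human health
M = np.array([[120.0, 3.0, 7.0],
              [ 90.0, 5.0, 6.0],
              [150.0, 2.0, 9.0]])
weights = np.array([0.5, 0.3, 0.2])              # e.g. from FAHP
benefit = np.array([False, False, True])         # consumption/impact: lower wins
print(topsis(M, weights, benefit))               # higher score = better ranked
```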

Keywords: construction process, construction technology, decision making, environmental performance, subcontractor

Procedia PDF Downloads 223
3018 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Getting reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infra-red sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as the identification and counting of animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be automatically retrieved by using state-of-the-art deep learning methods. Another major challenge that ecologists are facing is the recounting of a single animal multiple times due to that animal reappearing in other images taken by the same or other camera traps. Nonetheless, such information can be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps, so they can share the captured images along with timestamps, cumulative counts, and the dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. This method has been validated in the field and can be easily extended to other applications focusing on wildlife monitoring and management, where the traditional way of monitoring is expensive and time-consuming.

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 116
3017 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images

Authors: Ravija Gunawardana, Banuka Athuraliya

Abstract:

Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
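
As an assumed illustration of the symptom-based branch, the sketch below combines the three named classifiers with soft voting in scikit-learn; the synthetic symptom matrix stands in for the study's patient dataset, and soft voting is only one possible ensembling choice.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           random_state=0)   # toy binary symptom vectors
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")                           # average predicted probabilities
print("ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```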

Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine

Procedia PDF Downloads 114
3016 Numerical Implementation and Testing of Fractioning Estimator Method for the Box-Counting Dimension of Fractal Objects

Authors: Abraham Terán Salcedo, Didier Samayoa Ochoa

Abstract:

This work presents a numerical implementation of a method for estimating the box-counting dimension of self-avoiding curves on a planar space, fractal objects captured in digital images; this method is named the fractioning estimator. Classical methods of digital image processing, such as noise filtering, contrast manipulation, and thresholding, among others, are used in order to obtain binary images that are suitable for performing the necessary computations of the fractioning estimator. A user interface is developed for performing the image processing operations and testing the fractioning estimator on different captured images of real-life fractal objects. To analyze the results, the estimates obtained through the fractioning estimator are compared to the results obtained through other methods already implemented in available software for computing and estimating the box-counting dimension.
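
For context, the classical box-counting estimate that such reference tools compute fits the slope of log(box count) against log(1/box size). A minimal numpy version is sketched below; the power-of-two cropping and the diagonal test curve are illustrative choices, not the fractioning estimator itself.

```python
import numpy as np

def box_counting_dimension(binary):
    n = 2 ** int(np.floor(np.log2(min(binary.shape))))
    binary = binary[:n, :n]                      # crop to a power-of-two square
    sizes, counts = [], []
    size = n
    while size >= 1:
        blocks = binary.reshape(n // size, size, n // size, size)
        occupied = blocks.any(axis=(1, 3)).sum() # boxes containing the curve
        sizes.append(size)
        counts.append(occupied)
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# toy self-avoiding curve: the image diagonal (dimension should be ~1)
img = np.eye(256, dtype=bool)
print(round(box_counting_dimension(img), 2))
```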

Keywords: box-counting, digital image processing, fractal dimension, numerical method

Procedia PDF Downloads 56
3015 A Comprehensive Study of Camouflaged Object Detection Using Deep Learning

Authors: Khalak Bin Khair, Saqib Jahir, Mohammed Ibrahim, Fahad Bin, Debajyoti Karmaker

Abstract:

Object detection is a computer technology that deals with searching through digital images and videos for occurrences of semantic elements of a particular class. It is associated with image processing and computer vision. On top of object detection, we detect camouflaged objects within an image using deep learning techniques. Deep learning is a subset of machine learning that is essentially a neural network with three or more layers. Over 6,500 images that possess camouflage properties were gathered from various internet sources and divided into 4 categories to compare the results. Those images were labeled and then trained and tested using the VGG16 architecture in a Jupyter notebook on the TensorFlow platform. The architecture is further customized using transfer learning. Through transfer learning, methods are created for transferring information from one or more source tasks to improve learning in a related target task. The purpose of these transfer learning methodologies is to aid the evolution of machine learning toward the point where it is as efficient as human learning.
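
Under assumptions, a VGG16 transfer-learning setup like the one described can be expressed in a few lines of TensorFlow/Keras: freeze the ImageNet convolutional base and train a small classification head for the four camouflage categories. The head layers and optimizer below are illustrative choices, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),  # the 4 camouflage categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown
```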

Keywords: deep learning, transfer learning, TensorFlow, camouflage, object detection, architecture, accuracy, model, VGG16

Procedia PDF Downloads 112
3014 The Role of Privatization on the Formulation of Productive Supply Chain: The Case of Ethiopian Firms

Authors: Merhawit Fisseha Gebremariam, Yohannes Yebabe Tesfay

Abstract:

This study focuses on the formulation of a sustainable, effective, and efficient supply chain strategy framework for privatized Ethiopian firms. The study examined the role of privatization in productive sourcing, production, and delivery in the performance of Ethiopian firms. To analyze our hypotheses, the authors applied the concepts of Key Performance Indicators (KPIs), strategic outsourcing, purchasing portfolio analysis, and Porter's marketing analysis. The authors selected ten privatized companies and compared their financial, market expansion, and sustainability performance. The chi-square test showed that, at the 5% level of significance, privatization and outsourcing activities can assist the business performance of Ethiopian firms in terms of product promotion and new market expansion. At the 5% level of significance, the independent t-test showed that firms privatized by Ethiopian investors had stronger financial performance than those privatized by foreign investors. Furthermore, it is better if Ethiopian firms apply both cost leadership and differentiation strategies to thrive in their business areas. Ethiopian firms should implement the supply chain operations reference (SCOR) model as an exclusive framework that supports communication, links the supply chain partners, and enhances productivity. The government of Ethiopia should be aware that the privatization of firms by Ethiopian investors will strengthen the economy. Otherwise, the privatization process will be risky for the country, and therefore the government of Ethiopia should stop those activities.

Keywords: correlation analysis, market strategies, KPIs, privatization, risk and Ethiopia

Procedia PDF Downloads 37
3013 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

Over recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has been produced. Web technologies have been successfully used in telemedicine applications, and the combination of web technologies with DICOM was used to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP 'Apache' server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is used, in which the 2-D discrete wavelet transform decomposes the image and the wavelet coefficients are transmitted with entropy encoding after thresholding, to decrease transmission time and storage cost. The compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which achieved 83.86% when the 'coif3' wavelet filter is used.
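
As a hedged illustration of the compression path, the PyWavelets sketch below decomposes an image with the 'coif3' filter, hard-thresholds the coefficients, reconstructs, and computes MSE, PSNR, and a simple sparsity-based stand-in for the compression ratio; the threshold choice and the CR definition are assumptions, not the paper's.

```python
import numpy as np
import pywt

img = np.random.rand(256, 256) * 255                 # stand-in for a DICOM slice
coeffs = pywt.wavedec2(img, "coif3", level=3)        # 2-D DWT decomposition
arr, slices = pywt.coeffs_to_array(coeffs)
thresh = np.percentile(np.abs(arr), 90)              # keep largest 10% of coeffs
arr_t = pywt.threshold(arr, thresh, mode="hard")
rec = pywt.waverec2(pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"),
                    "coif3")[:256, :256]

mse = np.mean((img - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
cr = 1.0 - np.count_nonzero(arr_t) / arr_t.size      # fraction of zeroed coeffs
print(f"MSE={mse:.1f}  PSNR={psnr:.1f} dB  CR={cr:.2%}")
```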

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 141
3012 Human Gait Recognition Using Moment with Fuzzy

Authors: Jyoti Bharti, Navneet Manjhi, M. K. Gupta, Bimi Jain

Abstract:

Reliable gait features are required to extract gait sequences from images. This paper suggests a simple method for gait identification based on moments. Moment values are extracted from different numbers of frames of grayscale and silhouette images from the CASIA database. These moment values are treated as feature values. Fuzzy logic and a nearest neighbour classifier are used for classification. Both achieved high recognition rates.
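
A minimal sketch of the moment-feature idea: Hu moments computed from each silhouette frame with OpenCV, classified here with a nearest-neighbour model (the fuzzy-logic branch is omitted); the toy ellipse silhouettes stand in for CASIA frames and are an assumption.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hu_features(silhouette):
    m = cv2.moments(silhouette, binaryImage=True)
    hu = cv2.HuMoments(m).ravel()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # usual log scaling

rng = np.random.default_rng(0)
frames, labels = [], []
for subject in range(3):                 # toy stand-in for CASIA silhouettes
    for _ in range(10):
        img = np.zeros((64, 64), np.uint8)
        axes = (8 + 4 * subject + int(rng.integers(0, 2)), 20)
        cv2.ellipse(img, (32, 32), axes, 0, 0, 360, 255, -1)
        frames.append(hu_features(img))
        labels.append(subject)

clf = KNeighborsClassifier(n_neighbors=3).fit(frames, labels)
print("training accuracy:", clf.score(frames, labels))
```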

Keywords: gait, fuzzy logic, nearest neighbour, recognition rate, moments

Procedia PDF Downloads 726
3011 Metareasoning Image Optimization Q-Learning

Authors: Mahasa Zahirnia

Abstract:

The purpose of this paper is to explore new and effective ways of optimizing satellite images using artificial intelligence, and the process of implementing reinforcement learning to enhance the quality of data captured within the image. In our implementation of Bellman's reinforcement learning equations, associated state diagrams, and multi-stage image processing, we were able to enhance image quality and detect and define objects. Reinforcement learning is the differentiator in the area of artificial intelligence, and Q-learning relies on trial and error to achieve its goals. The reward system embedded in Q-learning allows the agent to self-evaluate its performance and decide on the best possible course of action based on the current and future environment. Results show that within a simulated environment, built on commercially available images, the detection rate was 40-90%. Reinforcement learning through the Q-learning algorithm is not just a desired but a required design criterion for image optimization and enhancement. The proposed methods present a cost-effective way of resolving the uncertainty of the data, because reinforcement learning finds ideal policies to manage the process using a smaller sample of images.
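
To make the trial-and-error loop concrete, here is a toy tabular Q-learning sketch in which states are coarse image-quality levels and actions are enhancement operators; the simulated environment, rewards, and hyperparameters are all invented for illustration and are not the paper's setup.

```python
import numpy as np

n_states, actions = 10, ["denoise", "contrast", "sharpen", "stop"]
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

def step(state, a):
    """Simulated environment: most ops tend to raise quality; 'stop' ends."""
    if actions[a] == "stop":
        return state, float(state), True          # reward = final quality
    nxt = min(n_states - 1, max(0, state + rng.choice([-1, 1], p=[0.3, 0.7])))
    return nxt, -0.1, False                       # small cost per operation

for _ in range(2000):                             # training episodes
    s, done = int(rng.integers(n_states)), False
    while not done:
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Bellman update toward reward + discounted best next value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2
print(Q.argmax(axis=1))   # learned policy: which operator to apply per state
```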

Keywords: Q-learning, image optimization, reinforcement learning, Markov decision process

Procedia PDF Downloads 190