Search results for: clustering images
1847 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks
Authors: Ameen Jameel Alawneh
Abstract:
A mobile ad-hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator. It has neither fixed infrastructure nor wireless ad hoc sessions. It inherently reaches several nodes with a single transmission, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed; it depends on the movement of the nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm, implemented using the GloMoSim network simulator, MATLAB, and Maple 11, show that the proposed algorithm achieves its goals.
Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets
Procedia PDF Downloads 392
1846 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies make intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake up periodically and aperiodically, depending on the traffic load, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than required, the connection is lost. After such disconnections, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability to the IoT network and hence produces poor-quality data. Because of this dynamic nature of IoT devices, the actual cause of abnormal data is unknown. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process the data and estimate data quality before putting it to use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to analyze stored data in the data-processing layer, without focusing on the challenges and issues that arise from the dynamic nature of IoT devices and how they impact data quality. This research presents a comprehensive review of the impact of the dynamic nature of IoT devices on data quality, along with a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality: DBSCAN clustering and weather sensors are used to build the data quality model for the water-quality sensors. An extensive study was conducted on the relationship between the data of weather sensors and the sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it performs outlier detection and removal, assesses completeness and patterns of missing values, and checks the accuracy of the data with the help of cluster positions. Finally, a statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
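As a rough illustration of the outlier-removal and consistency steps described above, the sketch below runs scikit-learn's DBSCAN on synthetic sensor readings and computes the coefficient of variation per cluster; the feature layout and the eps/min_samples settings are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch: DBSCAN-based outlier flagging and per-cluster
# consistency (coefficient of variation) for paired weather/water readings.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Illustrative data: columns = [air_temp, water_temp, turbidity]
rng = np.random.default_rng(0)
readings = rng.normal(loc=[20.0, 15.0, 5.0], scale=[2.0, 1.5, 0.8], size=(500, 3))
readings[::50] += rng.normal(scale=10.0, size=(10, 3))  # inject anomalies

X = StandardScaler().fit_transform(readings)
labels = DBSCAN(eps=0.7, min_samples=10).fit_predict(X)  # -1 marks outliers
print(f"flagged {np.sum(labels == -1)} outliers of {len(readings)} readings")

# Consistency per cluster: coefficient of variation of each feature
for k in np.unique(labels[labels != -1]):
    cluster = readings[labels == k]
    cov = cluster.std(axis=0) / cluster.mean(axis=0)
    print(f"cluster {k}: CoV per feature = {np.round(cov, 3)}")
```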
Procedia PDF Downloads 139
1845 Magnetic Resonance Imaging for Assessment of the Quadriceps Tendon Cross-Sectional Area as an Adjunctive Diagnostic Parameter in Patients with Patellofemoral Pain Syndrome
Authors: Jae Ni Jang, SoYoon Park, Sukhee Park, Yumin Song, Jae Won Kim, Keum Nae Kang, Young Uk Kim
Abstract:
Objectives: Patellofemoral pain syndrome (PFPS) is a common clinical condition characterized by anterior knee pain. Here, we investigated the quadriceps tendon cross-sectional area (QTCSA) as a novel predictor for the diagnosis of PFPS. By examining the association between the QTCSA and PFPS, we aimed to provide a more valuable diagnostic parameter and a more unequivocal assessment of its diagnostic potential by comparing the QTCSA with the quadriceps tendon thickness (QTT), a traditional measure of quadriceps tendon hypertrophy. Patients and Methods: This retrospective study included 30 patients with PFPS and 30 healthy participants who underwent knee magnetic resonance imaging. T1-weighted turbo spin echo transverse magnetic resonance images were obtained. The QTCSA was measured on the axial-angled phases of the images by drawing outlines, and the QTT was measured at the most hypertrophied part of the quadriceps tendon. Results: The average QTT and QTCSA for patients with PFPS (6.33±0.80 mm and 155.77±36.60 mm², respectively) were significantly greater than those for healthy participants (5.77±0.36 mm and 111.90±24.10 mm², respectively; both P<0.001). We used a receiver operating characteristic curve to confirm the sensitivities and specificities of both the QTT and QTCSA as predictors of PFPS. The optimal diagnostic cutoff value for the QTT was 5.98 mm, with a sensitivity of 66.7%, a specificity of 70.0%, and an area under the curve of 0.75 (0.62–0.88). The optimal diagnostic cutoff value for the QTCSA was 121.04 mm², with a sensitivity of 73.3%, a specificity of 70.0%, and an area under the curve of 0.83 (0.74–0.93). Conclusion: The QTCSA was found to be a more reliable diagnostic indicator for PFPS than the QTT.
Keywords: patellofemoral pain syndrome, quadriceps muscle, hypertrophy, magnetic resonance imaging
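The ROC analysis described in the Results can be reproduced in outline as below; the synthetic QTCSA samples are drawn from the reported group means and standard deviations, and the use of Youden's J to pick the cutoff is an assumption about how the optimal value was selected.

```python
# Hypothetical sketch: ROC analysis for QTCSA as a PFPS predictor, choosing
# the cutoff that maximizes Youden's J = sensitivity + specificity - 1.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Illustrative QTCSA values (mm^2): 30 healthy vs 30 PFPS, means from the abstract
qtcsa = np.concatenate([rng.normal(111.9, 24.1, 30), rng.normal(155.8, 36.6, 30)])
has_pfps = np.concatenate([np.zeros(30), np.ones(30)])

fpr, tpr, thresholds = roc_curve(has_pfps, qtcsa)
best = np.argmax(tpr - fpr)  # Youden's J statistic
print(f"AUC = {roc_auc_score(has_pfps, qtcsa):.2f}")
print(f"cutoff = {thresholds[best]:.1f} mm^2, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```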
Procedia PDF Downloads 51
1844 A U-Net Based Architecture for Fast and Accurate Diagram Extraction
Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal
Abstract:
In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Hence, document analysis requires extracting the diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with high accuracy, as needed in multiple applications. In the education domain, diagrams can have varied characteristics, viz. line-based content such as geometric diagrams, chemical bonds, mathematical formulas, etc. There are two broad categories of approaches that try to solve similar problems, viz. traditional computer-vision-based approaches and deep learning approaches. The traditional computer-vision-based approaches mainly leverage connected components and distance-transform-based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures. These approaches suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO
Procedia PDF Downloads 137
1843 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space
Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson
Abstract:
Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that takes into account tissue compartments and their associated volumes, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas comprised vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model to generate simulated T2/FLAIR MR images. Individual compartments' T2 values in the signal equation were taken from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images, and T2 values were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema. The ROI T2 values were compared to T2 values reported in the literature. The expanding extracellular space scheme had T2 values similar to the values calculated from the literature. The static extracellular space scheme had much lower T2 values, and no matter what T2 was associated with edema, the intensities did not come close to literature values. Expanding the extracellular space is necessary to achieve simulated edema intensities commensurate with acquired MRIs.
Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling
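The multi-compartmental signal equation is not given explicitly in the abstract; a minimal sketch of the general volume-weighted form such models take is shown below, with compartment volume fractions and T2 values as illustrative assumptions.

```python
# Hypothetical sketch: multi-compartment T2-weighted signal for one voxel,
# S(TE) = sum_i v_i * exp(-TE / T2_i), with volume fractions v_i summing to 1.
import numpy as np

def t2w_signal(volume_fractions, t2_values_ms, te_ms=100.0):
    """Weighted sum of mono-exponential T2 decays across tissue compartments."""
    v = np.asarray(volume_fractions, dtype=float)
    t2 = np.asarray(t2_values_ms, dtype=float)
    assert np.isclose(v.sum(), 1.0), "volume fractions must sum to 1"
    return float(np.sum(v * np.exp(-te_ms / t2)))

# Illustrative compartments: normoxic cells, hypoxic cells, necrosis, edema
v = [0.4, 0.2, 0.1, 0.3]
for t2_edema in (200.0, 1000.0, 9200.0):   # range explored in the abstract
    s = t2w_signal(v, [80.0, 70.0, 300.0, t2_edema])
    print(f"edema T2 = {t2_edema:7.0f} ms -> signal = {s:.3f}")
```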
Procedia PDF Downloads 235
1842 Effect of Depth on Texture Features of Ultrasound Images
Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes
Abstract:
In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is the depth. The attenuation of ultrasound with depth, the size of the region of interest, gain, and dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings might influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the left gastrocnemius muscle of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280×20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters, including gray level, variance, skewness, kurtosis, co-occurrence matrix, run-length matrix, gradient, autoregressive (AR) model, and wavelet transform features, were extracted from the images. The paired t-test was used to test the depth effect for the normally distributed data, and the Wilcoxon–Mann–Whitney test was used for the non-normally distributed data. The gray level, variance, and run-length matrix were significantly lower when the depth increased; the other texture parameters showed similar values at the different depths. All the texture parameters showed no significant difference between depths A and B (p > 0.05) except for gray level, variance, and run-length matrix (p < 0.05). This indicates that gray level, variance, and run-length matrix are depth dependent.
Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering
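A hedged sketch of how such features and the paired comparison might be computed, using scikit-image's GLCM utilities and SciPy's paired t-test on synthetic stand-ins for the two ROIs; the feature set shown is a small subset of those listed above.

```python
# Hypothetical sketch: first-order and GLCM texture features for two ROIs at
# different depths, compared with a paired t-test across subjects.
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def roi_features(roi_u8):
    """Gray level, variance, and GLCM contrast for one 8-bit ROI."""
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    return {"mean_gray": roi_u8.mean(),
            "variance": roi_u8.var(),
            "glcm_contrast": graycoprops(glcm, "contrast")[0, 0]}

rng = np.random.default_rng(2)
# Illustrative stand-ins for ROIs A (shallow) and B (deep), 10 subjects
feats_a, feats_b = [], []
for _ in range(10):
    a = rng.integers(80, 200, size=(20, 280), dtype=np.uint8)   # brighter
    b = rng.integers(40, 140, size=(20, 280), dtype=np.uint8)   # attenuated
    feats_a.append(roi_features(a)["mean_gray"])
    feats_b.append(roi_features(b)["mean_gray"])

t, p = stats.ttest_rel(feats_a, feats_b)  # paired t-test across subjects
print(f"mean gray level A vs B: t = {t:.2f}, p = {p:.4f}")
```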
Procedia PDF Downloads 295
1841 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery
Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa
Abstract:
In order to diminish health risks, it is of major importance to monitor air quality. However, this process entails high costs in physical and human resources. In this context, this research was carried out with the main objective of developing a predictive model for concentrations of inhalable particles (PM10-2.5) using remote sensing. To develop the model, satellite images, mainly from Landsat 8, of Mexico City's Metropolitan Area were used. Using historical PM10 and PM2.5 measurements of the RAMA (Automatic Environmental Monitoring Network of Mexico City) and by processing the available satellite images, a preliminary model was generated in which it was possible to observe critical opportunity areas that will allow the generation of a robust model. Through the preliminary model applied to the scenes of Mexico City, three areas of great interest were identified due to the presumed high concentration of PM: zones with high plant density, bodies of water, and soil without constructions or vegetation. To date, work continues on this line to improve the preliminary model that has been proposed. In addition, a brief analysis was made of six models, presented in articles developed in different parts of the world, in order to identify the optimal bands for generating a model suitable for Mexico City. It was found that infrared bands have helped modeling in other cities, but the effectiveness that these bands could provide under the geographic and climatic conditions of Mexico City is still being evaluated.
Keywords: air quality, modeling pollution, particulate matter, remote sensing
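A minimal sketch of the kind of band-to-PM regression the abstract implies, assuming a simple linear model on Landsat surface reflectances at station pixels; the band choice, model form, and data are illustrative assumptions, not the authors' preliminary model.

```python
# Hypothetical sketch: regressing ground-station PM10 on Landsat 8 band
# reflectances sampled at station pixels.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 120  # station-date samples
# Illustrative surface reflectances for bands 2-5 (blue, green, red, NIR)
X = rng.uniform(0.02, 0.4, size=(n, 4))
pm10 = 80 * X[:, 2] - 40 * X[:, 3] + 30 + rng.normal(0, 5, n)  # synthetic truth

model = LinearRegression().fit(X, pm10)
r2 = cross_val_score(model, X, pm10, cv=5, scoring="r2")
print(f"cross-validated R^2 = {r2.mean():.2f} +/- {r2.std():.2f}")
print("band coefficients:", np.round(model.coef_, 1))
```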
Procedia PDF Downloads 155
1840 Performance Prediction Methodology of Slow Aging Assets
Authors: M. Ben Slimene, M.-S. Ouali
Abstract:
Asset management of urban infrastructures faces a multitude of challenges that need to be overcome to obtain a reliable measurement of performance. Predicting the performance of slowly aging systems is one of those challenges; it helps the asset manager to investigate specific failure modes and to undertake the appropriate maintenance and rehabilitation interventions to avoid catastrophic failures as well as to optimize maintenance costs. This article presents a methodology for modeling the deterioration of slowly degrading assets based on their operating history. It consists of extracting degradation profiles by grouping together assets that exhibit similar degradation sequences, using an unsupervised classification technique derived from artificial intelligence. The obtained clusters are used to build the performance prediction models. The methodology is applied to a sample from a stormwater drainage culvert dataset.
Keywords: artificial intelligence, clustering, culvert, regression model, slow degradation
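A minimal sketch of the methodology under stated assumptions: k-means stands in for the unspecified unsupervised classification technique, and a linear trend per cluster stands in for the performance prediction model.

```python
# Hypothetical sketch: grouping assets by similar degradation sequences with
# k-means, then fitting a per-cluster regression as the performance model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
years = np.arange(20)
# Illustrative condition histories (1.0 = new) for 60 culverts, two wear rates
slow = 1.0 - 0.005 * years + rng.normal(0, 0.01, (30, 20))
fast = 1.0 - 0.020 * years + rng.normal(0, 0.01, (30, 20))
histories = np.vstack([slow, fast])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(histories)

for k in range(2):
    profile = histories[labels == k].mean(axis=0)       # cluster degradation profile
    trend = LinearRegression().fit(years.reshape(-1, 1), profile)
    print(f"cluster {k}: {np.sum(labels == k)} assets, "
          f"loss per year = {-trend.coef_[0]:.4f}")
```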
Procedia PDF Downloads 112
1839 Unsupervised Learning with Self-Organizing Maps for Named Entity Recognition in the CONLL2003 Dataset
Authors: Assel Jaxylykova, Alexnder Pak
Abstract:
This study utilized a Self-Organizing Map (SOM) for unsupervised learning on the CONLL-2003 dataset for Named Entity Recognition (NER). The process involved encoding words into 300-dimensional vectors using FastText. These vectors were input into a SOM grid, where training adjusted node weights to minimize distances. The SOM provided a topological representation for identifying and clustering named entities, demonstrating its efficacy without labeled examples. Results showed an F1-measure of 0.86, highlighting SOM's viability. Although some methods achieve higher F1 measures, SOM eliminates the need for labeled data, offering a scalable and efficient alternative. The SOM's ability to uncover hidden patterns provides insights that could enhance existing supervised methods. Further investigation into potential limitations and optimization strategies is suggested to maximize benefits.
Keywords: named entity recognition, natural language processing, self-organizing map, CONLL-2003, semantics
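A hedged sketch of the SOM step using the third-party minisom package; the word vectors here are random stand-ins for the FastText embeddings, and the grid size and training settings are illustrative assumptions.

```python
# Hypothetical sketch: clustering word vectors on a SOM grid and labeling
# grid cells by the majority entity type of the words they win.
from collections import Counter, defaultdict
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(5)
words = ["london", "paris", "john", "mary", "ibm", "google"]
tags = ["LOC", "LOC", "PER", "PER", "ORG", "ORG"]
vectors = rng.normal(size=(len(words), 300))  # stand-in for FastText vectors

som = MiniSom(10, 10, input_len=300, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(vectors, num_iteration=500)

# Map each grid cell to the entity tags of the words it wins
cell_tags = defaultdict(list)
for vec, tag in zip(vectors, tags):
    cell_tags[som.winner(vec)].append(tag)

for cell, ts in cell_tags.items():
    print(cell, Counter(ts).most_common(1)[0][0])
```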
Procedia PDF Downloads 48
1838 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issues faced by people nowadays. Skin cancer (SC) is one among them, and its detection relies on skin biopsy outputs and the expertise of doctors, but this consumes considerable time and can yield inaccurate results. At an early stage, skin cancer detection is a challenging task, and the disease easily spreads to the whole body, leading to an increase in the mortality rate. Skin cancer is curable when it is detected at an early stage. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is largely based on disease features such as shape, size, color, symmetry, etc. Many skin diseases share similar characteristics; hence, selecting important features from skin cancer dataset images is a challenging issue. An automated skin cancer detection and classification framework is therefore required to improve diagnostic accuracy and to handle the scarcity of human experts. Recently, deep learning techniques like convolutional neural networks (CNN), deep belief networks (DBN), artificial neural networks (ANN), recurrent neural networks (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measure are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases along with the mitigation of computational complexity and time consumption.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
Procedia PDF Downloads 129
1837 Fully Automated Methods for the Detection and Segmentation of Mitochondria in Microscopy Images
Authors: Blessing Ojeme, Frederick Quinn, Russell Karls, Shannon Quinn
Abstract:
The detection and segmentation of mitochondria from fluorescence microscopy are crucial for understanding the complex structure of the nervous system. However, the constant fission and fusion of mitochondria and image distortion in the background make the task of detection and segmentation challenging. In the literature, a number of open-source software tools and artificial intelligence (AI) methods have been described for analyzing mitochondrial images, achieving remarkable classification and quantitation results. However, the combined expertise in the medical field and AI required to utilize these tools poses a challenge to their full adoption and use in clinical settings. Motivated by the advantages of automated methods in terms of good performance, minimal detection time, ease of implementation, and cross-platform compatibility, this study proposes a fully automated framework for the detection and segmentation of mitochondria using both image shape information and descriptive statistics. Using the low-cost, open-source Python and OpenCV libraries, the algorithms are implemented in three stages: pre-processing, image binarization, and coarse-to-fine segmentation. The proposed model is validated using a mitochondrial fluorescence dataset. Ground-truth labels generated using Labkit were also used to evaluate the performance of our detection and segmentation model. The study produces good detection and segmentation results and reports the challenges encountered during the image analysis of mitochondrial morphology from the fluorescence mitochondrial dataset. A discussion of the methods and future perspectives of fully automated frameworks concludes the paper.
Keywords: 2D, binarization, CLAHE, detection, fluorescence microscopy, mitochondria, segmentation
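A minimal sketch of the three named stages using OpenCV; the CLAHE settings, morphology, and the area range used to keep candidate objects are illustrative assumptions rather than the authors' parameters.

```python
# Hypothetical sketch of the three stages: pre-processing (CLAHE),
# binarization (Otsu), and coarse-to-fine segmentation (morphology +
# connected components with a plausible-area filter).
import cv2
import numpy as np

def segment_mitochondria(gray_u8):
    # Stage 1: contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray_u8)

    # Stage 2: global binarization with Otsu's threshold
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Stage 3: coarse-to-fine clean-up, then label candidate objects
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel, iterations=2)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(opened)

    # Keep components whose area is plausible for a mitochondrion (assumed range)
    keep = [i for i in range(1, n) if 20 <= stats[i, cv2.CC_STAT_AREA] <= 2000]
    return labels, keep

# Usage with a synthetic frame (replace with a fluorescence microscopy image)
img = np.zeros((128, 128), np.uint8)
cv2.ellipse(img, (64, 64), (12, 5), 30, 0, 360, 180, -1)
labels, keep = segment_mitochondria(img)
print(f"detected {len(keep)} candidate object(s)")
```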
Procedia PDF Downloads 357
1836 Omni-Modeler: Dynamic Learning for Pedestrian Redetection
Authors: Michael Karnes, Alper Yilmaz
Abstract:
This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNN) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearances or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates the algorithm's performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images for each individual.
Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition
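A hedged sketch of the dictionary-plus-nearest-neighbor idea: a dynamic gallery of per-person embeddings that is updatable as frames arrive, with queries resolved by cosine similarity. The encoder, similarity measure, and threshold are stand-ins, not the Omni-Modeler's actual components.

```python
# Hypothetical sketch: a dynamic dictionary of per-person embedding examples,
# queried by nearest-neighbor cosine similarity.
import numpy as np

class DynamicGallery:
    def __init__(self):
        self.defs = {}  # person_id -> list of unit-norm embedding vectors

    def update(self, person_id, embedding):
        """Add one example embedding to a person's definition."""
        self.defs.setdefault(person_id, []).append(
            embedding / np.linalg.norm(embedding))

    def identify(self, embedding, threshold=0.7):
        """Return the best-matching id, or None if nothing is close enough."""
        q = embedding / np.linalg.norm(embedding)
        best_id, best_sim = None, threshold
        for pid, examples in self.defs.items():
            sim = max(float(q @ e) for e in examples)  # nearest neighbor
            if sim > best_sim:
                best_id, best_sim = pid, sim
        return best_id

rng = np.random.default_rng(6)
gallery = DynamicGallery()
anchor = rng.normal(size=128)                       # stand-in encoder output
for _ in range(5):                                  # few-shot enrollment
    gallery.update("person_A", anchor + rng.normal(scale=0.1, size=128))
print(gallery.identify(anchor + rng.normal(scale=0.1, size=128)))  # person_A
print(gallery.identify(rng.normal(size=128)))                      # likely None
```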
Procedia PDF Downloads 76
1835 Exploring the Nature and Meaning of Theory in the Field of Neuroeducation Studies
Authors: Ali Nouri
Abstract:
Neuroeducation is one of the most exciting research fields and is continually evolving. However, there is a need to develop its theoretical bases in connection with practice. The present paper is a starting attempt in this regard to provide a space from which to think about neuroeducational theory and to invoke more investigation in this area. Accordingly, a comprehensive theory of neuroeducation could be defined as a grouping or clustering of concepts and propositions that describe and explain the nature of human learning and provide valid interpretations and implications useful for educational practice in relation to philosophical aspects or values. Whereas it should originate from the philosophical foundations of the field and explain its normative significance, it needs to be testable in terms of rigorous evidence in order to fundamentally advance contemporary educational policy and practice. There is thus a pragmatic need to include a course on neuroeducational theory in the curriculum of the field, and a need to articulate and disseminate considerable discussion of the subject within professional journals and academic societies.
Keywords: neuroeducation studies, neuroeducational theory, theory building, neuroeducation research
Procedia PDF Downloads 448
1834 Lung HRCT Pattern Classification for Cystic Fibrosis Using a Convolutional Neural Network
Authors: Parisa Mansour
Abstract:
Cystic fibrosis (CF) is one of the most common autosomal recessive diseases among whites. It mostly affects the lungs, causing infections and inflammation that account for 90% of deaths in CF patients. Because of the high variability in clinical presentation and organ involvement, investigating treatment responses and evaluating lung changes over time are critical to preventing CF progression. High-resolution computed tomography (HRCT) greatly facilitates the assessment of lung disease progression in CF patients. Recently, artificial intelligence has been used to analyze chest CT scans of CF patients. In this paper, we propose a convolutional neural network (CNN) approach to classify CF lung patterns in HRCT images. The proposed network consists of two convolutional layers with 3 × 3 kernels, each followed by max pooling, and two dense layers with 1024 and 10 neurons, respectively. A softmax layer produces the predicted output probability distribution over classes; it has three outputs corresponding to the categories normal (healthy), bronchitis, and inflammation. To train and evaluate the network, we constructed a patch-based dataset extracted from more than 1100 lung HRCT slices obtained from 45 CF patients. Comparative evaluation showed the effectiveness of the proposed CNN compared to its close peers. A classification accuracy, average sensitivity, and specificity of 93.64%, 93.47%, and 96.61% were achieved, indicating the potential of CNNs in analyzing lung CF patterns and monitoring lung health. In addition, the visual features extracted by our proposed method can be useful for automatic measurement and, ultimately, evaluation of the severity of CF patterns in lung HRCT images.
Keywords: HRCT, CF, cystic fibrosis, chest CT, artificial intelligence
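A hedged Keras sketch of the architecture as described (two 3 × 3 convolution blocks with max pooling, dense layers of 1024 and 10 units, and a 3-way softmax head); the filter counts, input patch size, and optimizer are illustrative assumptions.

```python
# Hypothetical sketch of the described CNN; only the layer structure follows
# the abstract, the remaining hyperparameters are assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),            # assumed HRCT patch size
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1024, activation="relu"),
    layers.Dense(10, activation="relu"),
    layers.Dense(3, activation="softmax"),      # normal / bronchitis / inflammation
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```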
Procedia PDF Downloads 65
1833 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with optional slice-removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher Dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on this dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
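The Dice score used to compare the segmentation methods can be sketched as below; the masks are synthetic stand-ins for predicted and ground-truth lung masks.

```python
# Hypothetical sketch of the Dice score: Dice = 2|A ∩ B| / (|A| + |B|)
# over binary segmentation masks.
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice coefficient between two binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Illustrative masks: the prediction overlaps most of the ground truth
truth = np.zeros((128, 128), dtype=bool)
truth[30:90, 40:100] = True
pred = np.zeros_like(truth)
pred[35:95, 40:100] = True
print(f"Dice = {dice_score(pred, truth):.3f}")
```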
Procedia PDF Downloads 131
1832 Aspects and Studies of Fractal Geometry in Automatic Breast Cancer Detection
Authors: Mrinal Kanti Bhowmik, Kakali Das Jr., Barin Kumar De, Debotosh Bhattacharjee
Abstract:
Breast cancer is the most common cancer and a leading cause of death for women in the 35 to 55 age group. Early detection of breast cancer can decrease its mortality rate. Mammography is considered the 'gold standard' for breast cancer detection and is a very popular modality, presently used for breast cancer screening and detection. The screening of digital mammograms often leads to overdiagnosis and, consequently, to unnecessary traumatic and painful biopsies. For that reason, recent studies involving the use of thermal imaging as a screening technique have generated growing interest, especially in cases where mammography is limited, as in young patients who have dense breast tissue. A tumor is a significant sign of breast cancer in both mammography and thermography. Tumors are complex in structure, and they also exhibit different statistical and textural features compared to the breast background tissue. Fractal geometry is used to describe this type of complex structure according to its main characteristics, where traditional Euclidean geometry fails. Over the last few years, fractal geometry has been applied in many medical image (1D, 2D, or 3D) analysis applications. It also plays a significant role in breast cancer detection using digital mammogram images, and fractals are likewise used in thermography for early detection of masses using thermal texture. This paper presents an overview of the recent aspects and initiatives of fractals in breast cancer detection in both mammography and thermography. The scope of fractal geometry in automatic breast cancer detection using digital mammogram and thermogram images is analysed, which forms a foundation for further study on the application of fractal geometry in medical imaging for improving the efficiency of automatic detection.
Keywords: fractal, tumor, thermography, mammography
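Fractal analysis of a segmented mass typically reduces to estimating a fractal dimension; a minimal box-counting sketch over a binary mask is shown below, with a random blob standing in for a segmented tumor.

```python
# Hypothetical sketch: box-counting fractal dimension of a binary tumor mask,
# estimated as the slope of log N(s) versus log(1/s) over box sizes s.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel
        h, w = mask.shape
        grid = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(grid.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Illustrative irregular mask (random blob stands in for a segmented tumor)
rng = np.random.default_rng(7)
mask = rng.random((256, 256)) > 0.8
print(f"estimated fractal dimension: {box_counting_dimension(mask):.2f}")
```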
Procedia PDF Downloads 388
1831 Evaluation of the Urban Landscape Structures and Dynamics of Hawassa City, Using Satellite Images and Spatial Metrics Approaches, Ethiopia
Authors: Berhanu Terfa, Nengcheng C.
Abstract:
The study deals with the analysis of urban expansion and land transformation of Hawassa City using remote sensing data and landscape metrics during the last three decades (1987–2017). Remote sensing data from various multi-temporal satellite images, viz. TM (1987), TM (1995), ETM+ (2005), and OLI (2017), were used to examine the urban expansion, growth types, and spatial isolation within the urban landscape in order to develop an understanding of the trends of built-up growth in Hawassa City, Ethiopia. Landscape metrics and built-up density were employed to analyze the pattern, process, and overall growth status. The area under investigation was divided into concentric circles, with consecutive circles of 1 km incremental radius from the central pixel (Central Business District), for analysis. The result showed that the built-up area had increased by 541.32% between 1987 and 2017, and extension growth types (more than 67%) were observed. The major growth took place in the north-west direction, followed by the north direction, in a haphazard manner during the 1987–1995 period, whereas predominant built-up development was observed in the south and southwest directions during the 1995–2017 period. Landscape metrics results revealed that urban patch density, total edge, and edge density increased, while mean nearest-neighbor distance decreased, showing a tendency toward sprawl.
Keywords: landscape metrics, spatial patterns, remote sensing, multi-temporal, urban sprawl
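A hedged sketch of two of the named landscape metrics, patch density and mean nearest-neighbor distance, computed from a synthetic binary built-up map; the connectivity rule and pixel size are illustrative assumptions.

```python
# Hypothetical sketch: patch density and mean nearest-neighbor distance from a
# binary built-up map (stand-in for a classified Landsat scene).
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

rng = np.random.default_rng(8)
built_up = rng.random((200, 200)) > 0.93          # stand-in classified map
cell_area_ha = 0.09                               # 30 m Landsat pixel = 0.09 ha

labels, n_patches = ndimage.label(built_up)       # 4-connected patches
total_area_ha = built_up.size * cell_area_ha
print(f"patch density = {n_patches / total_area_ha * 100:.2f} per 100 ha")

# Mean nearest-neighbor distance between patch centroids (in pixels)
centroids = np.array(ndimage.center_of_mass(built_up, labels,
                                            range(1, n_patches + 1)))
d = cdist(centroids, centroids)
np.fill_diagonal(d, np.inf)
print(f"mean nearest-neighbor distance = {d.min(axis=1).mean():.1f} px")
```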
Procedia PDF Downloads 286
1830 A Literature Review on the Role of Local Potential for Creative Industries
Authors: Maya Irjayanti
Abstract:
Local creativity utilization has been a strategic investment to be expanded as a creative industry due to its significant contribution to the national gross domestic product. Many developed and developing countries look toward creative industries as an agenda for economic growth. This study aims to identify the role of local potential for creative industries from various empirical studies. The method performed in this study involves a review of peer-reviewed journal articles and conference papers addressing local potential and creative industries. The literature review analysis includes several steps: material collection, descriptive analysis, category selection, and material evaluation. Finally, the expected outcome is a clustering of creative industries based on the local potential of various nations. In addition, the findings of this study will serve as a reference for future research exploring a particular area with well-known aspects of local potential for creative industry products.
Keywords: business, creativity, local potential, local wisdom
Procedia PDF Downloads 386
1829 Enhanced Acquisition Time of a Quantum Holography Scheme within a Nonlinear Interferometer
Authors: Sergio Tovar-Pérez, Sebastian Töpfer, Markus Gräfe
Abstract:
This work proposes a technique that decreases the detection acquisition time of quantum holography schemes down to one-third, which allows the possibility of imaging moving objects. Since its invention, quantum holography with undetected photons has gained interest in the scientific community, mainly due to its ability to tailor the detected wavelengths according to the needs of the scheme implementation. Yet, while this wavelength flexibility grants the scheme a wide range of possible applications, an important matter remained to be addressed. Since the scheme uses digital phase-shifting techniques to retrieve the object information from the interference pattern, it is necessary to acquire a set of at least four images of the interference pattern with well-defined phase steps to recover the full object information. Hence, the imaging method requires longer acquisition times to produce well-resolved images, and as a consequence, the measurement of moving objects remains out of reach of the imaging scheme. This work presents the use and implementation of a spatial light modulator along with a digital holographic technique called quasi-parallel phase shifting. This technique uses the spatial light modulator to build a structured phase image consisting of a chessboard pattern containing the different phase steps for digitally calculating the object information. Depending on the reduction in the number of needed frames, the acquisition time is reduced by a significant factor. This technique opens the door to the implementation of the scheme for moving objects; in particular, the application of the scheme to imaging live specimens comes one step closer.
Keywords: quasi-parallel phase shifting, quantum imaging, quantum holography, quantum metrology
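For orientation, the sketch below shows conventional four-step phase shifting, the kind of digital phase stepping whose frame count the quasi-parallel technique reduces; the scene and intensities are synthetic.

```python
# Hypothetical sketch of standard four-step phase shifting: with
# interferograms I1..I4 recorded at phase steps 0, pi/2, pi, 3pi/2,
# the object phase is phi = atan2(I4 - I2, I1 - I3).
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)

# Illustrative object phase and synthetic interferograms
x = np.linspace(-1, 1, 256)
phi_true = np.pi * np.outer(x, x)                    # smooth test phase
a, b = 1.0, 0.8                                      # background, modulation
frames = [a + b * np.cos(phi_true + step)
          for step in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]

phi_est = four_step_phase(*frames)
err = np.angle(np.exp(1j * (phi_est - phi_true)))    # wrapped phase error
print(f"max |phase error| = {np.abs(err).max():.2e} rad")
```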
Procedia PDF Downloads 114
1828 Fast Short-Term Electrical Load Forecasting under High Meteorological Variability with a Multiple Equation Time Series Approach
Authors: Charline David, Alexandre Blondin Massé, Arnaud Zinflou
Abstract:
In 2016, Clements, Hurn, and Li proposed a multiple equation time series approach for short-term load forecasting, reporting an average mean absolute percentage error (MAPE) of 1.36% on an 11-year dataset for the Queensland region in Australia. We present an adaptation of their model to the electrical power load consumption of the whole Quebec province in Canada. More precisely, we take into account two additional meteorological variables — cloudiness and wind speed — on top of temperature, as well as the use of multiple meteorological measurements taken at different locations on the territory. We also consider other minor improvements. Our final model shows an average MAPE score of 1.79% over an 8-year dataset.
Keywords: short-term load forecasting, special days, time series, multiple equations, parallelization, clustering
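The reported error metric is easy to state precisely; a minimal MAPE sketch on illustrative hourly loads follows.

```python
# Hypothetical sketch of the reported error metric: mean absolute percentage
# error (MAPE) between actual and forecast loads.
import numpy as np

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Illustrative daily load curve (MW) and a forecast within a few percent
actual = np.array([21000, 20500, 22400, 25800, 27300, 26100, 23900, 22200])
forecast = actual * (1 + np.array([0.01, -0.02, 0.015, -0.01,
                                   0.02, 0.0, -0.015, 0.01]))
print(f"MAPE = {mape(actual, forecast):.2f}%")
```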
Procedia PDF Downloads 103
1827 Digitizing Masterpieces in Italian Museums: Techniques, Challenges and Consequences from Giotto to Caravaggio
Authors: Ginevra Addis
Abstract:
The possibility of reproducing physical artifacts in a digital format is one of the opportunities offered by the technological advancements in information and communication that museums most frequently promote. Indeed, the study and conservation of our cultural heritage have seen significant advancement due to three-dimensional acquisition and modeling technology. A variety of laser scanning systems have been developed, based either on optical triangulation or on time-of-flight measurement, capable of producing digital 3D images of complex structures with high resolution and accuracy. It is necessary, however, to explore the challenges and opportunities that this practice brings within museums. The purpose of this paper is to understand what change is introduced by digital techniques in those museums that are hosting digital masterpieces. The methodology investigates three distinguished Italian exhibitions, related to the territory of Milan, analyzing the following issues about museum practices: 1) how digitizing art masterpieces increases the number of visitors; 2) what need calls for the digitization of artworks; 3) which techniques are most used; 4) what the setting is; 5) the consequences of not publishing hard copies of catalogues; 6) the outlook for these practices in the future. Findings will show, first, how interconnection plays an important role in rebuilding a collection spread all over the world; second, how digital artwork duplication and extension of reality entail new forms of accessibility; third, that collection and preservation through digitization of images have both a social and an educational mission; and fourth, that convergence of the properties of different media (such as web and radio) is key to encouraging people to get actively involved in digital exhibitions. The present analysis suggests further research that should create museum models and interaction spaces that act as catalysts for innovation.
Keywords: digital masterpieces, education, interconnection, Italian museums, preservation
Procedia PDF Downloads 175
1826 A Deleuzean Feminist Analysis of the Everyday, Gendered Performances of Teen Femininity: A Case Study on Snaps and Selfies in East London
Authors: Christine Redmond
Abstract:
This paper contributes to research on gendered, digital identities by exploring how selfies offer scope for disrupting and moving through gendered and racial ideals of feminine beauty. The selfie involves self-presentation, filters, captions, hashtags, online publishing, likes, and more, making the relationship between subjectivity, practice, and the social use of selfies a complex process. Employing qualitative research methods on youth selfies in the UK, the author investigates interdisciplinary entanglements between studies of social media and fields within gender, media, and cultural studies, providing a material-discursive treatment of the selfie as an embodied practice. Drawing on data collected from focus groups with teenage girls in East London, the study explores how girls experience and relate to selfies and snaps in their everyday lives. The author's Deleuzean feminist approach suggests that bodies and selfies are not individual, disembodied entities between which there is a mediating interaction. Instead, bodies and selfies are positioned as entangled to the point where it becomes unclear where a selfie ends and a body begins. Recognising selfies not just as images but as material and social assemblages opens up possibilities for unpacking the selfie in ways that move beyond the representational model used in some studies of socially mediated digital images. The study reveals how the selfie functions to enable moments of empowerment within limiting, dominant ideologies of Euro-centrism, patriarchy, and heteronormativity.
Keywords: affect theory, femininity, gender, heteronormativity, photography, selfie, snapchat
Procedia PDF Downloads 247
1825 Drape Simulation by Commercial Software and Subjective Assessment of Virtual Drape
Authors: Evrim Buyukaslan, Simona Jevsnik, Fatma Kalaoglu
Abstract:
Simulation of fabrics is more difficult than most other simulations due to the complex mechanics of fabrics. Most virtual garment simulation software uses a mass-spring model and incorporates fabric mechanics into its simulation models. The accuracy and fidelity of this virtual garment simulation software remain an open question. Drape is a subjective phenomenon, and the evaluation of drape has been studied since the 1950s; fabric and garment simulation, on the other hand, is relatively new. Understanding the drape perception of subjects looking at fabric simulations is critical as virtual try-on gains importance with growing online apparel sales. The projected future of online apparel retailing is that users will view their avatars and try garments on their avatars in the virtual environment. It is a well-known fact that users will not be eager to accept this innovative technology unless it is realistic enough. Therefore, it is essential to understand what users see when fabrics are displayed in a virtual environment. Are they able to distinguish the differences between various fabrics in a virtual environment? The purpose of this study is to investigate human perception when looking at a virtual fabric and to determine the most visually noticeable drape parameter. To this end, five different fabrics were mechanically tested, and their drape simulations were generated by commercial garment simulation software (Optitex®). The simulation images were processed by image analysis software to calculate drape parameters, namely drape coefficient, node severity, and peak angles. A questionnaire was developed to evaluate drape properties subjectively in a virtual environment. The drape simulation images were shown to 27 subjects, who were asked to rank the samples according to the queried drape property. The answers were compared to the calculated drape parameters. The results show that subjects are quite sensitive to drape coefficient changes, while they are not very sensitive to changes in node dimensions and node distributions.
Keywords: drape simulation, drape evaluation, fabric mechanics, virtual fabric
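The drape coefficient is conventionally defined from the projected area of the draped specimen (the Cusick setup); a minimal sketch under that assumption is shown below, with a synthetic wavy outline standing in for a processed simulation image.

```python
# Hypothetical sketch of the drape coefficient from a binary projection image,
# following the usual Cusick definition:
# DC = (draped area - disc area) / (flat fabric area - disc area) * 100.
import numpy as np

def drape_coefficient(projection_mask, disc_radius_px, fabric_radius_px):
    draped_area = np.count_nonzero(projection_mask)      # projected shadow area
    disc_area = np.pi * disc_radius_px ** 2              # support disc
    fabric_area = np.pi * fabric_radius_px ** 2          # undraped specimen
    return 100.0 * (draped_area - disc_area) / (fabric_area - disc_area)

# Illustrative mask: a wavy draped outline around a 60 px disc, 150 px fabric
h = w = 400
y, x = np.mgrid[:h, :w]
r = np.hypot(y - h / 2, x - w / 2)
theta = np.arctan2(y - h / 2, x - w / 2)
outline = 100 + 15 * np.cos(6 * theta)                   # six drape nodes
mask = r <= outline
print(f"drape coefficient = {drape_coefficient(mask, 60, 150):.1f}%")
```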
Procedia PDF Downloads 339
1824 Analytical Study of Data Mining Techniques for Software Quality Assurance
Authors: Mariam Bibi, Rubab Mehboob, Mehreen Sirshar
Abstract:
Satisfying customer requirements is the ultimate goal of producing or developing any product, and the quality of the product is judged by the level of customer satisfaction. Different techniques reported in the survey enhance product quality through software defect prediction and by locating missing software requirements. Some mining techniques have been proposed to assess individual performance indicators in collaborative environments so as to reduce errors at the individual level. The basic intention is to produce a product with zero or few defects, thereby producing the best possible product quality-wise. In the survey analysis, techniques such as genetic algorithms, artificial neural networks, classification and clustering techniques, and decision trees are studied. The analysis shows that these techniques have contributed much to the improvement and enhancement of product quality.
Keywords: data mining, defect prediction, missing requirements, software quality
Procedia PDF Downloads 468
1823 Business Intelligence for Profiling of Telecommunication Customer
Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro
Abstract:
Business intelligence is a methodology that systematically exploits data to produce information and knowledge; it can support the decision-making process. Two methods in business intelligence are data warehousing and data mining. A data warehouse can store historical data from transactional systems. For data modeling in the data warehouse, we apply Kimball's dimensional modeling. Data mining is used to extract patterns from the data and gain insight from them. Data mining has many techniques, one of which is segmentation. For profiling telecommunication customers, we use customer segmentation according to customers' service usage, invoices, and payments. Customers can be grouped according to their characteristics, and profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, with the RFM (recency, frequency, and monetary) model as its input variables. All data mining processes are carried out with IBM SPSS Modeler.
Keywords: business intelligence, customer segmentation, data warehouse, data mining
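A minimal sketch of the described segmentation, re-created with scikit-learn's K-Means instead of IBM SPSS Modeler; the RFM distributions and the number of segments are illustrative assumptions.

```python
# Hypothetical sketch: K-Means segmentation on RFM (recency, frequency,
# monetary) features for telecommunication customers.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n = 1000
rfm = np.column_stack([
    rng.integers(1, 365, n),        # recency: days since last usage
    rng.poisson(20, n),             # frequency: transactions per period
    rng.gamma(2.0, 50.0, n),        # monetary: billed amount
])

X = StandardScaler().fit_transform(rfm)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

for k in range(4):
    seg = rfm[labels == k]
    print(f"segment {k}: {len(seg):4d} customers, "
          f"mean R/F/M = {np.round(seg.mean(axis=0), 1)}")
```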
Procedia PDF Downloads 484
1822 Learning Grammars for Detection of Disaster-Related Micro Events
Authors: Josef Steinberger, Vanni Zavarella, Hristo Tanev
Abstract:
Natural disasters cause tens of thousands of victims and massive material damage. We refer to all those events caused by natural disasters, such as damage to people, infrastructure, vehicles, services, and resource supply, as micro events. This paper addresses the problem of micro-event detection in online media sources. We present a natural language grammar learning algorithm, based on distributional clustering and detection of word collocations, and apply it to online news. We also explore the extraction of micro events from social media and describe a Twitter mining robot, which uses combinations of keywords to detect tweets that talk about the effects of disasters.
Keywords: online news, natural language processing, machine learning, event extraction, crisis computing, disaster effects, Twitter
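A minimal sketch of the keyword-combination matching the Twitter mining robot is described as using; the trigger-word groups are illustrative assumptions.

```python
# Hypothetical sketch: a tweet is flagged as a micro event when it contains
# at least one trigger from each group of some keyword combination.
COMBINATIONS = [
    # (disaster terms, effect terms): both groups must match
    ({"flood", "earthquake", "storm"}, {"injured", "killed", "destroyed"}),
    ({"flood", "storm"}, {"power outage", "road closed", "evacuated"}),
]

def detect_micro_event(tweet: str) -> bool:
    text = tweet.lower()
    return any(
        any(term in text for term in disasters)
        and any(term in text for term in effects)
        for disasters, effects in COMBINATIONS
    )

tweets = [
    "Earthquake hit the region, dozens injured near the old bridge",
    "Lovely sunny day at the beach",
    "Storm last night, whole district evacuated and roads closed",
]
for t in tweets:
    print(detect_micro_event(t), "-", t)
```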
Procedia PDF Downloads 478
1821 The Event of Extreme Precipitation Occurred in the Metropolitan Mesoregion of the Capital of Para
Authors: Natasha Correa Vitória Bandeira, Lais Cordeiro Soares, Claudineia Brazil, Luciane Teresa Salvi
Abstract:
The intense rain event that occurred between February 16 and 18, 2018, in the city of Barcarena in Pará, located in the north of Brazil, demonstrates the importance of analyzing this type of event. The metropolitan mesoregion of Belem was severely hit by rains well above the averages normally expected for that time of year; this phenomenon affected, in addition to the capital, the municipalities of Barcarena, Murucupi, and Muruçambá. The result was a great flood in the rivers of the region, whose basins received precipitation of great intensity. This caused concern for the local population, because companies that accumulate ore tailings are located in this region, and in this specific case, the dam of one of these companies leached ore into the water bodies of the Murucupi River Basin. This article aims to characterize the phenomenon through a spatial analysis of the distribution of rainfall, using data from atmospheric soundings, satellite images, radar images, and data from the GPCP (Global Precipitation Climatology Project), in addition to rainfall stations located in the study region. The results demonstrated a dissociation between the data measured at the meteorological stations and the other forms of analysis of this extreme event. Monitoring carried out solely on the basis of data from pluviometric stations is not sufficient for monitoring and/or diagnosing extreme weather events, and investment by the competent bodies is important to install a network of pluviometric stations large enough to meet the demand in a given region.
Keywords: extreme precipitation, great flood, GPCP, ore dam
Procedia PDF Downloads 108
1820 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data
Authors: Sachin Nagargoje
Abstract:
Complete labeled data is often difficult to obtain in a practical scenario, and even when one manages to obtain the data, its quality is always in question. In the shopping vertical, offers are the input data, provided by advertisers with or without good-quality information. In this paper, the author investigated the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (offers with a badly written title or partial product details) in the shopping vertical domain. The author found that the semi-supervised learning method improved recall in the smartphone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. This also produced a significant increase in revenue, but that cannot be publicly disclosed.
Keywords: semi-supervised learning, clustering, recall, coverage
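A hedged sketch of one simple semi-supervised setup consistent with the abstract, using scikit-learn's SelfTrainingClassifier; the offer features, base classifier, and confidence threshold are illustrative assumptions, not the author's method.

```python
# Hypothetical sketch: self-training for flagging unhealthy offers. A base
# classifier trained on a few labeled offers pseudo-labels the confident
# unlabeled ones (marked -1), growing the training set and boosting recall.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(10)
n = 2000
# Illustrative offer features: [title_length, num_product_attributes]
X = np.column_stack([rng.integers(5, 120, n), rng.integers(0, 15, n)])
y_true = ((X[:, 0] < 25) | (X[:, 1] < 3)).astype(int)  # 1 = unhealthy offer

y = np.full(n, -1)                 # -1 marks unlabeled examples
labeled = rng.choice(n, size=100, replace=False)
y[labeled] = y_true[labeled]       # only 5% of offers carry labels

model = SelfTrainingClassifier(LogisticRegression(), threshold=0.9)
model.fit(X, y)

pred = model.predict(X)
recall = (pred[y_true == 1] == 1).mean()
print(f"recall on unhealthy offers: {recall:.2f}")
```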
Procedia PDF Downloads 122
1819 Cognitive Linguistic Features Underlying Spelling Development in a Second Language: A Case Study of L2 Spellers in South Africa
Authors: A. Van Staden, A. Tolmie, E. Vorster
Abstract:
Research confirms the multifaceted nature of spelling development and underscores the importance of the cognitive and linguistic skills that affect sound spelling development, such as working and long-term memory, phonological and orthographic awareness, mental orthographic images, semantic knowledge, and morphological awareness. This has clear implications for the many South African English second language (L2) spellers who attempt to become proficient spellers. Since English has an opaque orthography, with irregular spelling patterns and insufficient sound/grapheme correspondences, L2 spellers can neither rely nor draw on the phonological awareness skills of their first language (for example, Sesotho and many other African languages) to assist them in spelling the majority of English words. Epistemologically, this research is informed by social constructivism. In addition, the researchers hypothesized that the principles of the Overlapping Waves Theory were an appropriate lens through which to investigate whether L2 spellers could significantly improve their spelling skills via an alternative route to spelling development, namely the orthographic route, and more specifically via the application of visual imagery. Post-test results confirmed the results of previous research arguing for the interactive nature of different cognitive and linguistic systems, such as working memory and its subsystems and long-term memory, as learners were systematically guided to store visual orthographic images of words in their long-term lexicons. Moreover, the results have shown that L2 spellers in the experimental group (n = 9) significantly outperformed L2 spellers in the control group (n = 9), whose intervention involved phonological awareness (and coding), including the teaching of spelling rules. Consequently, L2 learners in the experimental group significantly improved on all the post-test measures included in this investigation, namely the four sub-tests of short-term memory and two spelling measures (i.e., diagnostic and standardized measures). Against this background, the findings of this study look promising and show that, within a social-constructivist learning environment, learners can be systematically guided to apply higher-order thinking processes such as visual imagery to successfully store and retrieve mental images of spelling words from their output lexicons. Moreover, results from the present study could play an important role in directing research into this under-researched aspect of L2 literacy development within the South African education context.
Keywords: English second language spellers, phonological and orthographic coding, social constructivism, visual imagery as spelling strategy
Procedia PDF Downloads 359
1818 Upgrading of Problem-Based Learning with Educational Multimedia to the Undergraduate Students
Authors: Sharifa Alduraibi, Abir El Sadik, Ahmed Elzainy, Alaa Alduraibi, Ahmed Alsolai
Abstract:
Introduction: Problem-based learning (PBL) is an active, student-centered educational modality driven by students' interest, which requires continuous motivation to maintain their engagement. The new era of professional information technology has facilitated the utilization of educational multimedia, such as videos, soundtracks, and photographs, to promote students' learning. The aim of the present study was to introduce multimedia-enriched PBL scenarios for the first time in the College of Medicine, Qassim University, as an incentive for better student engagement, and to evaluate students' performance and satisfaction. Methodology: Two multimedia-enhanced PBL scenarios were implemented for the third-year students in the urinary system block. Radiological images (plain CT scan and X-ray of the abdomen and renal nuclear scan) correlated with their pathological gross photographs were added to the scenarios. One week before the first sessions, pre-recorded orientation videos for PBL tutors were distributed to clarify the multimedia incorporated in the scenarios. Two other traditional PBL scenarios, devoid of multimedia demonstrating the pathological and radiological findings, were designed. Results and Discussion: A comparison between the formative assessment results at the end of the two PBL modalities revealed a significant increase in students' engagement, critical thinking, and practical reasoning skills during the multimedia-enhanced sessions. A student perception survey showed great satisfaction with the new strategy. Conclusion: It could be concluded from the current work that multimedia creates a technology-based teaching strategy that inspires students' self-directed thinking and promotes students' overall achievement.
Keywords: multimedia, pathology and radiology images, problem-based learning, videos
Procedia PDF Downloads 157