Search results for: SURF (Speeded-Up Robust Features)
4579 Robust Method for Evaluation of Catchment Response to Rainfall Variations Using Vegetation Indices and Surface Temperature
Authors: Revalin Herdianto
Abstract:
Recent climate changes increase uncertainties in vegetation conditions such as health and biomass, both globally and locally. Detection is, however, difficult because of the spatial and temporal scale of vegetation coverage. Because vegetation responds uniquely to its environmental conditions, such as water availability, the interplay between vegetation dynamics and hydrologic conditions leaves a signature in their feedback relationship. Vegetation indices (VI) depict vegetation biomass and photosynthetic capacity, indicating vegetation dynamics as a response to variables including hydrologic conditions and microclimate factors such as rainfall characteristics and land surface temperature (LST). It is hypothesized that this signature may be revealed by VI in its relationship with the other variables. To study the signature, several catchments in Asia, Australia, and Indonesia were analysed to assess how hydrologic characteristics vary with vegetation type. The methods used in this study include geographic identification and pixel marking of the studied catchments, time-series analysis of VI and LST for the marked pixels, and smoothing with the Savitzky-Golay filter, which is effective for large areas and extensive data. Time series of VI, LST, and rainfall from satellites and ground stations, coupled with digital elevation models, were analysed and presented. The study found that the hydrologic response of vegetation to rainfall variations can be observed within one hydrologic year, in which a drought event can be detected a year later as suppressed growth. However, above-average annual rainfall does not promote above-average growth as shown by VI. The technique is found to be a robust and tractable approach for assessing catchment dynamics in changing climates.
Keywords: vegetation indices, land surface temperature, vegetation dynamics, catchment
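Where the abstract mentions Savitzky-Golay smoothing of VI time series, a minimal sketch of that filtering step is given below; the window length, polynomial order, and the synthetic NDVI series are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic 16-day NDVI time series for one marked pixel (one hydrologic year).
t = np.arange(0, 365, 16)
ndvi = 0.45 + 0.25 * np.sin(2 * np.pi * t / 365) + np.random.normal(0, 0.05, t.size)

# Savitzky-Golay smoothing: fits a low-order polynomial in a sliding window,
# which preserves the seasonal shape while suppressing high-frequency noise.
ndvi_smooth = savgol_filter(ndvi, window_length=7, polyorder=2)

# The smoothed series is what would be compared against rainfall and LST records.
print(np.c_[t, ndvi.round(3), ndvi_smooth.round(3)][:5])
```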
Procedia PDF Downloads 287
4578 A Quantitative Evaluation of Text Feature Selection Methods
Authors: B. S. Harish, M. B. Revanasiddappa
Abstract:
Due to the rapid growth of text documents in digital form, automated text classification has become an important research area in the last two decades. The major challenges of text document representation are high dimensionality, sparsity, volume, and semantics. Since terms are the only features that can be found in documents, the selection of good terms (features) plays a very important role. In text classification, feature selection is a strategy that can be used to improve classification effectiveness, computational efficiency, and accuracy. In this paper, we present a quantitative analysis of the most widely used feature selection (FS) methods, viz. Term Frequency-Inverse Document Frequency (tf-idf), Mutual Information (MI), Information Gain (IG), Chi-Square (χ²), Term Frequency-Relevance Frequency (tfrf), Term Strength (TS), Ambiguity Measure (AM), and Symbolic Feature Selection (SFS), to classify text documents. We evaluated all the feature selection methods on standard datasets such as 20 Newsgroups, the 4 University dataset, and Reuters-21578.
Keywords: classifiers, feature selection, text classification
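As an illustration of one of the compared methods, the sketch below scores terms with the chi-square statistic over a bag-of-words representation and keeps the top-k features before classification; the chosen categories, value of k, and the classifier are placeholder assumptions rather than the paper's settings.

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Two categories of 20 Newsgroups keep the example small.
train = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
test = fetch_20newsgroups(subset="test", categories=["sci.space", "rec.autos"])

# Bag-of-words -> chi-square term scoring -> keep 1,000 best terms -> classifier.
model = make_pipeline(
    CountVectorizer(stop_words="english"),
    SelectKBest(chi2, k=1000),
    MultinomialNB(),
)
model.fit(train.data, train.target)
print("accuracy:", model.score(test.data, test.target))
```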
Procedia PDF Downloads 459
4577 Investigation of New Gait Representations for Improving Gait Recognition
Authors: Chirawat Wattanapanich, Hong Wei
Abstract:
This study presents new gait representations for improving gait recognition accuracy across gait appearances, such as normal walking, wearing a coat, and carrying a bag. Based on the Gait Energy Image (GEI), two ideas are implemented to generate new gait representations. One is to append the lower knee region to the original GEI, and the other is to apply convolutional operations to the GEI and its variants. A set of new gait representations is created and used for training multi-class Support Vector Machines (SVMs). Tests are conducted on the CASIA dataset B. Various combinations of the gait representations with different convolutional kernel sizes and different numbers of kernels used in the convolutional processes are examined. Both the entire images used as features and dimensionally reduced features obtained by Principal Component Analysis (PCA) are tested in gait recognition. Interestingly, both new techniques, appending the lower knee region to the original GEI and the convolutional GEI, contribute significantly to the performance improvement in gait recognition. The experimental results show that the average recognition rate can be improved from 75.65% to 87.50%.
Keywords: convolutional image, lower knee, gait
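A minimal sketch of the convolutional-GEI idea is given below, assuming GEIs are already available as 2D arrays; the kernel, PCA dimensionality, and the random stand-in data are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in GEIs: 200 silhouette energy images of size 64x44, with 10 subject labels.
geis = rng.random((200, 64, 44))
labels = rng.integers(0, 10, 200)

kernel = np.ones((5, 5)) / 25.0            # one example of a convolutional kernel
conv_geis = np.stack([convolve(g, kernel) for g in geis])
features = conv_geis.reshape(len(conv_geis), -1)

# Reduce dimensionality, then train a multi-class SVM (one-vs-one by default).
features = PCA(n_components=50).fit_transform(features)
clf = SVC(kernel="linear").fit(features[:150], labels[:150])
print("hold-out accuracy:", clf.score(features[150:], labels[150:]))
```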
Procedia PDF Downloads 202
4576 Identification of High-Rise Buildings Using Object Based Classification and Shadow Extraction Techniques
Authors: Subham Kharel, Sudha Ravindranath, A. Vidya, B. Chandrasekaran, K. Ganesha Raj, T. Shesadri
Abstract:
Digitization of urban features is a tedious and time-consuming process when done manually. In addition, Indian cities have complex habitat and clustering patterns, which make it even more difficult to map features. This paper attempts to classify urban objects in satellite imagery using object-oriented classification techniques, in which classes such as vegetation, water bodies, buildings, and the shadows adjacent to buildings were mapped semi-automatically. The building layer obtained from the object-oriented classification was used along with already available building layers. The main focus, however, lay in the extraction of high-rise buildings using spatial technology, digital image processing, and modeling, which would otherwise be a very difficult task to carry out manually. Results indicated a considerable rise in the total number of buildings in the city. High-rise buildings were successfully mapped using satellite imagery and spatial technology, along with logical reasoning and mathematical considerations. The results clearly show the ability of Remote Sensing and GIS to solve complex problems in urban scenarios, such as studying urban sprawl and identifying more complex features like high-rise buildings and multi-dwelling units. The object-oriented technique proved effective and yielded an overall efficiency of 80 percent in the classification of high-rise buildings.
Keywords: object oriented classification, shadow extraction, high-rise buildings, satellite imagery, spatial technology
Procedia PDF Downloads 155
4575 Identification of Vessel Class with Long Short-Term Memory Using Kinematic Features in Maritime Traffic Control
Authors: Davide Fuscà, Kanan Rahimli, Roberto Leuzzi
Abstract:
Preventing abuse and illegal activities in a given area of the sea is a very difficult and expensive task. Artificial intelligence offers the possibility of implementing new methods to identify a vessel's class type from its kinematic features. The task strictly depends on the quality of the data. This paper explores the application of a deep long short-term memory model using only AIS streams of relatively low quality. The proposed model reaches high accuracy in detecting nine vessel classes representing the most common vessel types in the Ionian-Adriatic Sea. The model was applied during the Adriatic-Ionian trial period of the international EU ANDROMEDA H2020 project to identify vessels whose behavior departs from what is expected for their declared type.
Keywords: maritime surveillance, artificial intelligence, behavior analysis, LSTM
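A minimal sketch of classifying vessel type from kinematic sequences with an LSTM is shown below; the feature set (four kinematic channels per AIS point), sequence length, and network sizes are assumptions for illustration, not the configuration used in the project.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Stand-in AIS tracks: 512 tracks, 60 time steps, 4 kinematic features each,
# labelled with one of 9 vessel classes.
x = rng.random((512, 60, 4)).astype("float32")
y = rng.integers(0, 9, 512)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(60, 4)),
    tf.keras.layers.Masking(mask_value=0.0),      # tolerate padded/missing AIS points
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(9, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=64, validation_split=0.2, verbose=2)
```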
Procedia PDF Downloads 231
4574 A Research and Application of Feature Selection Based on IWO and Tabu Search
Authors: Laicheng Cao, Xiangqian Su, Youxiao Wu
Abstract:
Feature selection is one of the important problems in network security, pattern recognition, data mining, and other fields. In order to remove redundant features and effectively improve the detection speed of intrusion detection systems, this paper proposes a new feature selection method based on the invasive weed optimization (IWO) algorithm and the tabu search (TS) algorithm. IWO is used for global search and tabu search for local search, improving on the results of the IWO algorithm alone. The experimental results show that the method can effectively remove redundant features from network data, reduce computation time, and, while guaranteeing an accurate detection rate, effectively improve the speed of the detection system.
Keywords: intrusion detection, feature selection, IWO, tabu search
Procedia PDF Downloads 530
4573 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-aided imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of retinas. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases, including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model for analyzing these retinal images that provides a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a neural network architecture trained on a dataset of over 20,000 real-world OCT images, using residual neural networks with cyclic pooling. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, therefore facilitating earlier treatment, which results in improved post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
Procedia PDF Downloads 181
4572 User-Awareness from Eye Line Tracing During Specification Writing to Improve Specification Quality
Authors: Yoshinori Wakatake
Abstract:
Many defects found after the release of software packages are caused by omissions of sufficient test items in test specifications. Poor test specifications are detected by manual review, which imposes a high human load. The prevention of omissions depends on the end-user awareness of test specification writers. If test specifications were written while envisioning the behavior of end-users, the number of omitted test items would be greatly reduced. This paper draws attention to the fact that writers who can achieve this differ from those who cannot, not only in the richness of their descriptions but also in their gaze information. It proposes a method to estimate the degree of user-awareness of writers through the analysis of their gaze information while writing test specifications. We conduct an experiment to obtain the gaze information of writers of test specifications. Test specifications are then automatically classified using the gaze information; a Random Forest model is constructed for the classification, and the classification is highly accurate. By examining the explanatory variables that turn out to be important, we learn which behavioral features distinguish high-quality test specifications from others; these are confirmed to be pupil diameter and the number and duration of blinks. The paper also investigates the test specifications automatically classified with gaze information to discuss features of the writing styles at each quality level. The proposed method enables us to automatically classify test specifications. It also prevents test item omissions, because it reveals writing features that high-quality test specifications should satisfy.
Keywords: blink, eye tracking, gaze information, pupil diameter, quality improvement, specification document, user-awareness
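A minimal sketch of the classification step is given below, assuming per-writer gaze features (mean pupil diameter, blink count, mean blink duration) have already been extracted; the synthetic data, labels, and model settings are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 120
# Columns: mean pupil diameter (mm), blink count, mean blink duration (s).
gaze_features = np.column_stack([
    rng.normal(3.5, 0.4, n),
    rng.poisson(18, n),
    rng.normal(0.25, 0.05, n),
])
# 1 = high-quality specification, 0 = otherwise (hypothetical review labels).
quality = rng.integers(0, 2, n)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, gaze_features, quality, cv=5).mean())

clf.fit(gaze_features, quality)
for name, imp in zip(["pupil_diameter", "blink_count", "blink_duration"],
                     clf.feature_importances_):
    print(f"{name}: {imp:.2f}")   # which gaze cue the forest relies on
```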
Procedia PDF Downloads 64
4571 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in a loss of income for the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor that mimics the experts' knowledge of wood texture to extract the properties of pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. A backpropagation neural network is then used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the texture. Experimental results show the efficiency of the proposed method.
Keywords: classification, feature extraction, fuzzy, inspection system, image analysis, macroscopic images
Procedia PDF Downloads 426
4570 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories
Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan
Abstract:
In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafting features to capture domain-specific knowledge. Although intuitive, this approach is prone to missing important predictive features, as recent work in deep learning has shown. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use "fading." We find that this approach is superior to a traditional feedforward network (FFN) model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Finally, we find that a combined FFN+CNN is the best-performing network, with an error rate of 39%.
Keywords: basketball, computer vision, image processing, convolutional neural network
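The rasterization idea, with separate channels for the two teams and older trajectory points drawn more faintly to encode time, can be sketched as follows; the court grid size, decay factor, and stand-in trajectories are assumptions for illustration.

```python
import numpy as np

def trajectories_to_image(team_a, team_b, grid=(47, 50), decay=0.95):
    """Rasterize two teams' trajectories into a 2-channel image with temporal fading.

    team_a, team_b: arrays of shape (players, timesteps, 2) holding court (x, y)
    coordinates already scaled to the grid. Later timesteps are drawn brighter;
    earlier ones fade by `decay` per step, encoding time in pixel intensity.
    """
    img = np.zeros((2, *grid), dtype=np.float32)
    for ch, team in enumerate((team_a, team_b)):
        t_max = team.shape[1]
        for t in range(t_max):
            intensity = decay ** (t_max - 1 - t)
            for x, y in team[:, t]:
                img[ch, int(x), int(y)] = max(img[ch, int(x), int(y)], intensity)
    return img

rng = np.random.default_rng(0)
offense = rng.uniform([0, 0], [46, 49], size=(5, 32, 2))
defense = rng.uniform([0, 0], [46, 49], size=(5, 32, 2))
image = trajectories_to_image(offense, defense)
print(image.shape)   # (2, 47, 50): one channel per team, ready to feed a CNN
```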
Procedia PDF Downloads 153
4569 Design and Analysis of Proximity Fed Single Band Microstrip Patch Antenna with Parasitic Lines
Authors: Inderpreet Kaur, Sukhjit Kaur, Balwinder Singh Sohi
Abstract:
The design proposed in this paper mainly focuses on the implementation of a single-feed compact rectangular microstrip patch antenna (MSA) for single-band application. The antenna presented here also works in dual band, but its best performance is obtained when it is optimised to work in single-band mode. In this paper, a new feeding structure is applied in the patch antenna design to overcome the undesirable features of earlier multilayer feeding structures while maintaining their interesting features. To make the proposed antenna more efficient, the antenna design parameters were optimised using HFSS's Optimetrics. For the proposed antenna, one resonant frequency is obtained at 6.03 GHz, with a bandwidth of 167 MHz and a return loss of -33.82 dB. The characteristics of the designed structure are investigated using an FEM-based electromagnetic solver.
Keywords: bandwidth, return loss, parasitic lines, microstrip antenna
Procedia PDF Downloads 463
4568 Speech Emotion Recognition with Bi-GRU and Self-Attention based Feature Representation
Authors: Bubai Maji, Monorama Swain
Abstract:
Speech is considered an essential and most natural medium for interaction between machines and humans. However, extracting effective features for speech emotion recognition (SER) remains challenging. Existing studies capture temporal information, but high-level temporal-feature learning is yet to be investigated. In this paper, we present an efficient novel method using the self-attention (SA) mechanism in combination with a Convolutional Neural Network (CNN) and a Bi-directional Gated Recurrent Unit (Bi-GRU) network to learn high-level temporal features. In order to further enhance the representation of the high-level temporal features, we integrate the Bi-GRU output with learnable-weight features via SA and improve the performance. We evaluate our proposed method on our created SITB-OSED database and on IEMOCAP. We report that the experimental results of our proposed method achieve state-of-the-art performance on both databases.
Keywords: Bi-GRU, 1D-CNNs, self-attention, speech emotion recognition
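A minimal sketch of a CNN followed by a Bi-GRU and a self-attention layer over frame-level acoustic features is shown below; the input shape, layer sizes, and the use of Keras' built-in Attention layer are assumptions made for illustration, not the authors' exact architecture.

```python
import tensorflow as tf

n_frames, n_feats, n_emotions = 300, 40, 4   # e.g. 300 frames of 40-dim log-mel features

inputs = tf.keras.Input(shape=(n_frames, n_feats))
x = tf.keras.layers.Conv1D(64, kernel_size=5, padding="same", activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(2)(x)
x = tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True))(x)

# Self-attention over the Bi-GRU outputs: query and value are the same sequence.
attn = tf.keras.layers.Attention()([x, x])
x = tf.keras.layers.GlobalAveragePooling1D()(attn)
outputs = tf.keras.layers.Dense(n_emotions, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```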
Procedia PDF Downloads 113
4567 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning
Authors: Richard O’Riordan, Saritha Unnikrishnan
Abstract:
Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality. Furthermore, these algorithms have no room for error when operating at high levels of autonomy. Although current research details existing computer vision and deep learning algorithms, their methodologies, and individual results, it also details the challenges the algorithms face, the resources needed to operate them, and the shortcomings experienced when detecting lanes in certain weather and lighting conditions. This paper explores these shortcomings and attempts to implement a lane detection algorithm that could be used to achieve improvements in AV lane detection systems. The paper uses a pre-trained LaneNet model to detect lane and non-lane pixels via binary segmentation as the base detection method, first on the existing BDD100K dataset and then on a custom, locally generated dataset. The selected roads for the custom dataset are modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network is an older one whose infrastructure and lane markings reflect its age. The performance of the proposed method is evaluated on the custom dataset and compared to its performance on BDD100K. In summary, this paper uses transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection
Procedia PDF Downloads 104
4566 A Comparative Study for Various Techniques Using WEKA for Red Blood Cells Classification
Authors: Jameela Ali, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy
Abstract:
Red blood cells (RBCs) are the most common type of blood cell and the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as "anemia". Abnormalities in RBCs affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying red blood cells as normal or abnormal (anemic) using WEKA, an open-source collection of machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell has a spherical or non-spherical shape, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best achieved classification rates are 97%, 98%, and 79% for Support Vector Machines, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.
Keywords: red blood cells, classification, radial basis function neural networks, support vector machine, k-nearest neighbors algorithm
Procedia PDF Downloads 480
4565 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density
Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita
Abstract:
Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold reinforced with graphene oxide and hydroxyapatite (HAp) and crosslinked with genipin were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites, with the HAp/biopolymer ratio of natural bone, were obtained by cascading sonochemical treatments, followed by uncomplicated casting methods and freeze-drying. Their structure was characterized by Fourier Transform Infrared Spectroscopy and X-ray Diffraction, while the overall morphology was investigated by Scanning Electron Microscopy (SEM) and micro-Computed Tomography (µ-CT). Following that, in vitro enzyme degradation was performed to detect the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as nanolayer-specific signals are absent from both the FTIR and XRD spectra, and the specific spectral features of the polymers persisted as the GO load was increased. Overall, the GO-induced material structuration, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows an auspicious density gradient tuned for hybrid osseous/cartilage matter architectures, which was mirrored in the mouse-model tests. Hence, the synthesis route of this natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705, and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from the European Regional Development Fund 'Investments for your future' under project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to the European Regional Development Fund through the Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED.
Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite
Procedia PDF Downloads 104
4564 Epileptic Seizure Prediction Focusing on Relative Change in Consecutive Segments of EEG Signal
Authors: Mohammad Zavid Parvez, Manoranjan Paul
Abstract:
Epilepsy is a common neurological disorder characterized by sudden recurrent seizures. The electroencephalogram (EEG) is widely used to diagnose possible epileptic seizures, and many research works have been devoted to predicting epileptic seizures by analyzing EEG signals. Seizure prediction from EEG signals is a challenging task due to the variation of brain signals between patients. In this paper, we propose a new approach to feature extraction based on phase correlation in EEG signals. In phase correlation, we calculate the relative change between two consecutive segments of an EEG signal and then combine the changes with those of neighboring signals to extract features. These features are then used to classify preictal/ictal and interictal EEG signals for seizure prediction. Experimental results show that the proposed method achieves a good prediction rate with greater consistency on the benchmark dataset across different brain locations, compared to the existing state-of-the-art methods.
Keywords: EEG, epilepsy, phase correlation, seizure
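Where the abstract describes measuring the relative change between two consecutive EEG segments with phase correlation, a generic FFT-based phase-correlation sketch is shown below; the segment length, sampling rate, and synthetic signal are assumptions, and the exact feature construction in the paper may differ.

```python
import numpy as np

def phase_correlation(seg_a, seg_b):
    """Normalized cross-power spectrum of two equal-length segments.

    The location and sharpness of the peak indicate how much seg_b is a
    shifted version of seg_a; a broad, weak peak signals a relative change.
    """
    fa, fb = np.fft.rfft(seg_a), np.fft.rfft(seg_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    return np.fft.irfft(cross, n=len(seg_a))

fs = 256                                      # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
segment1 = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
segment2 = np.sin(2 * np.pi * 10 * (t - 0.05)) + 0.3 * rng.standard_normal(t.size)

corr = phase_correlation(segment1, segment2)
print("peak value:", corr.max().round(3), "at lag", int(np.argmax(corr)))  # candidate features
```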
Procedia PDF Downloads 309
4563 Addressing Challenging Behaviours of Individuals with Positive Behaviour Support
Authors: Divi Sharma
Abstract:
The emergence of positive behaviour support (PBS) is directly linked to applied behaviour analysis; it incorporates evidence-based approaches to addressing ethical challenges and improving the autonomy, participation, and overall quality of life of people living and learning in complex social environments. Its features include lifestyle improvement, collaboration with everyday caregivers, tracking progress in sound steps, comprehensive performance-based interventions, striving for contextual equality, and ensuring entry and implementation. This paper aims to summarize these features with the support of case examples, such as involving caregivers as active participants in behavioural interventions and creating effective interventions within natural routines. Addressing lifestyle changes, as well as a wide variety of behavioural changes, helps develop strong strategies that reduce professional dependence.
Keywords: positive behaviour support, quality of life, performance-based interventions, behavioural changes, participation
Procedia PDF Downloads 170
4562 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated; the number of patients in each tumor stage group (I-II, III, or IV) was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed to extract 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). The selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with the SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with the one-versus-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis
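Although the study used MATLAB, the GLCM-feature, sequential-forward-selection, and k-NN pipeline can be sketched in Python as below; the stand-in image patches, GLCM parameters, and number of selected features are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
patches = rng.integers(0, 64, size=(42, 32, 32))   # stand-in tumor ROI patches
stages = rng.integers(0, 3, 42)                     # stand-in stage labels (I-II, III, IV)

props = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]

def glcm_features(patch):
    glcm = graycomatrix(patch.astype(np.uint8), distances=[1],
                        angles=[0, np.pi / 2], levels=64, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

X = np.vstack([glcm_features(p) for p in patches])

# Sequential forward selection wrapped around a k-NN (k=3) classifier.
knn = KNeighborsClassifier(n_neighbors=3)
sfs = SequentialFeatureSelector(knn, n_features_to_select=5, direction="forward", cv=3)
sfs.fit(X, stages)
knn.fit(sfs.transform(X), stages)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```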
Procedia PDF Downloads 326
4561 Optical Flow Localisation and Appearance Mapping (OFLAAM) for Long-Term Navigation
Authors: Daniel Pastor, Hyo-Sang Shin
Abstract:
This paper presents a novel method for using optical flow for long-term navigation. Unlike standard SLAM approaches for augmented reality, OFLAAM is designed for Micro Air Vehicles (MAVs). It uses an optical flow camera pointing downwards, an IMU, and a monocular camera pointing forwards. This configuration avoids expensive mapping and tracking of 3D features: the features are only stored in a vocabulary list by a localization module, which is used to recover from losses of the navigation estimate. That module, based on the well-established DBoW2 algorithm, is also used to close the loop and allow long-term navigation in confined areas. The combination of high-speed optical flow navigation with a low-rate localization algorithm enables fully autonomous navigation for MAVs while reducing the overall computational load. The framework is implemented in ROS (Robot Operating System) and tested attached to a laptop. A representative scenario is used to analyse the performance of the system.
Keywords: vision, UAV, navigation, SLAM
Procedia PDF Downloads 606
4560 Novel Fluorescent High Density Polyethylene Composites for Fused Deposition Modeling 3D Printing in Packaging Security Features
Authors: Youssef R. Hassan, Mohamed S. Hasanin, Reda M. Abdelhameed
Abstract:
Recently, innovations in packaging security features have become more important for verifying the originality of packaging in industrial applications. Luminescent 3D-printing materials are a promising option, providing a unique opportunity for the design and application of 3D printing. The weak emission of terbium ions in salt form, as a source of green emission, prevents their use in industrial applications, so the search for a stable and highly emissive material has become essential. Nowadays, metal-organic frameworks (MOFs) play an important role in designing light-emitting materials. In this work, a fluorescent high-density polyethylene (FHDPE) composite filament with Tb-benzene 1,3,5-tricarboxylate (Tb-BTC) MOFs for 3D printing has been successfully developed. HDPE pellets were mixed with Tb-BTC and melt-extruded with a single-screw extruder. It was found that Tb-BTC dispersed uniformly in the HDPE matrix and significantly increased the crystallinity of the PE, which not only maintained the good thermal properties but also improved the mechanical properties of the Tb-BTC@HDPE composites. Notably, the composite filaments emitted ultra-bright green light under a UV lamp, and the fluorescence intensity increased as the Tb-BTC content increased. Finally, several brightly luminescent, exquisite articles could be manufactured with these new fluorescent filaments on a fused deposition modeling (FDM) 3D printer. In this context, the development of the novel fluorescent Tb-BTC@HDPE composites was combined with 3D printing technology to enhance packaging security features.
Keywords: 3D printing, fluorescent, packaging, security
Procedia PDF Downloads 101
4559 Stream Extraction from 1m-DTM Using ArcGIS
Authors: Jerald Ruta, Ricardo Villar, Jojemar Bantugan, Nycel Barbadillo, Jigg Pelayo
Abstract:
Streams are important in providing water supply for industrial, agricultural, and human consumption; in short, where there are streams, there is life. Identifying streams is essential, since many developed cities are situated in the vicinity of these bodies of water, and in flood management streams serve as basins for surface runoff within the area. This study aims to process a high-resolution digital terrain model (DTM) with 1-meter resolution and generate stream features from it using the Hydrology tools of ArcGIS. The raster was filled, flow direction and flow accumulation were processed, stream order was derived with the raster calculator, the result was converted to vector, and undesirable features were cleaned using ancillary data or Google Earth. In field validation, streams were classified as perennial, intermittent, or ephemeral. Results show that more than 90% of the extracted features were found accurate through field validation.
Keywords: digital terrain models, hydrology tools, Strahler method, stream classification
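A minimal arcpy sketch of that Fill, Flow Direction, Flow Accumulation, threshold, Stream Order, and raster-to-vector workflow is given below; it assumes an ArcGIS licence with the Spatial Analyst extension, and the workspace path, layer names, and accumulation threshold are placeholders.

```python
import arcpy
from arcpy.sa import Fill, FlowDirection, FlowAccumulation, Con, StreamOrder, StreamToFeature

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\catchment.gdb"   # placeholder workspace

dtm = "dtm_1m"                                   # 1 m digital terrain model
filled = Fill(dtm)                               # remove sinks
flow_dir = FlowDirection(filled)
flow_acc = FlowAccumulation(flow_dir)

# Keep cells whose contributing area exceeds a chosen threshold (placeholder value).
streams = Con(flow_acc > 10000, 1)
order = StreamOrder(streams, flow_dir, "STRAHLER")

# Convert the ordered stream raster to polylines for cleaning and field validation.
StreamToFeature(order, flow_dir, "streams_strahler", "SIMPLIFY")
arcpy.CheckInExtension("Spatial")
```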
Procedia PDF Downloads 272
4558 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed
Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan
Abstract:
The air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. In the last three decades, the air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, and bubbling behavior. Although numerous studies have been published, the fluidization features, especially dual dense medium fluidization features, have rarely been reported. In dual dense medium fluidized beds, different combinations of dense mediums play a significant role in the variation of fluidization quality, and thus influence coal separation efficiency. Moreover, to what extent different dense mediums mix, and to what extent the two-component particulate mixture affects fluidization performance and quality, have remained open questions. The proposed work attempts to reveal the underlying mechanisms of the generation and evolution of the two-component particulate mixture during fluidization. Based on computational fluid dynamics and discrete particle modelling, the movement and evolution of dual dense mediums in an air dense medium fluidized bed have been simulated, and dual dense medium fluidization experiments have been conducted. Electrical capacitance tomography was employed to investigate the distribution of the two-component mixture in the experiments. The underlying mechanisms of two-component particulate fluidization are expected to be demonstrated through the analysis and comparison of the simulation and experimental results.
Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling
Procedia PDF Downloads 382
4557 Learning Dynamic Representations of Nodes in Temporally Variant Graphs
Authors: Sandra Mitrovic, Gaurav Singh
Abstract:
In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been paid to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no study including network features has yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them with an autoencoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and Logistic Regression. The observed results show the effectiveness of the proposed approach compared to ad-hoc feature-selection-based approaches and static node2vec.
Keywords: churn prediction, dynamic networks, node2vec, auto-encoders
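The concatenate-then-compress step can be sketched as below; the per-timestamp node2vec embeddings are stood in by random arrays (in practice they would come from running node2vec on each snapshot graph), and the embedding size, number of snapshots, and autoencoder bottleneck are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n_nodes, n_snapshots, emb_dim = 1000, 6, 64

# Stand-ins for node2vec embeddings computed on each temporal snapshot graph.
snapshot_embeddings = [rng.normal(size=(n_nodes, emb_dim)) for _ in range(n_snapshots)]

# Concatenate each node's embeddings across time: shape (n_nodes, n_snapshots * emb_dim).
concatenated = np.hstack(snapshot_embeddings).astype("float32")

# Autoencoder-like compression down to a manageable per-node feature vector.
bottleneck = 64
inputs = tf.keras.Input(shape=(concatenated.shape[1],))
encoded = tf.keras.layers.Dense(128, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(bottleneck, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(128, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(concatenated.shape[1])(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(concatenated, concatenated, epochs=5, batch_size=64, verbose=0)

node_features = encoder.predict(concatenated, verbose=0)  # input to SVM / Logistic Regression
print(node_features.shape)
```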
Procedia PDF Downloads 314
4556 Integrative Transcriptomic Profiling of NK Cells and Monocytes: Advancing Diagnostic and Therapeutic Strategies for COVID-19
Authors: Salma Loukman, Reda Benmrid, Najat Bouchmaa, Hicham Hboub, Rachid El Fatimy, Rachid Benhida
Abstract:
In this study, we use integrated transcriptomic datasets from the GEO repository to investigate immune dysregulation in COVID-19. In this context, we focused on NK cell and CD14+ monocyte gene expression, considering datasets GSE165461 and GSE198256, respectively. Other datasets covering PBMCs, lung, olfactory and sensory epithelium, and lymph were used to provide robust validation of our results. This approach gave an integrated view of the immune responses in COVID-19, pointing out a set of potential biomarkers and therapeutic targets, with special regard to standard physiological conditions. IFI27, MKI67, CENPF, MBP, HBA2, TMEM158, THBD, HBA1, LHFPL2, SLA, and AC104564.3 were identified as key genes from our analysis, involved in critical biological processes related to inflammation, immune regulation, oxidative stress, and metabolism. Such processes are important for understanding the heterogeneous clinical manifestations of COVID-19, from acute illness to the long-term effects now known as 'long COVID'. Subsequent validation with additional datasets consolidated these genes as robust biomarkers with an important role in the diagnosis of COVID-19 and the prediction of its severity. Moreover, their enrichment in key pathophysiological pathways presents them as potential targets for therapeutic intervention. The results provide insight into the molecular dynamics of COVID-19 in cell populations such as NK cells and monocytes. Thus, this study constitutes a solid basis for targeted diagnostic and therapeutic development and makes a relevant contribution to ongoing research efforts toward better management and mitigation of the pandemic.
Keywords: SARS-CoV-2, RNA-seq, biomarkers, severity, long COVID-19, bioanalysis
Procedia PDF Downloads 12
4555 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities
Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto
Abstract:
The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework that helps identify the factors impacting the market share of a healthcare provider facility or hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven machine learning regression framework that aids strategists in formulating key decisions to improve a facility's market share, which in turn improves the quality of healthcare services. The US (United States) healthcare business is chosen for the study, with data spanning 60 key facilities in Washington State and about 3 years of history. In the current analysis, market share is defined as the ratio of a facility's encounters to the total encounters among the group of potential competitor facilities. The study proposes a two-pronged approach: competitor identification, followed by a regression approach to evaluate and predict market share. The model-agnostic technique SHAP is leveraged to quantify the relative importance of the features impacting market share. Typical techniques in the literature quantify the degree of competitiveness among facilities with an empirically calculated competitive factor that interprets the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust because it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder for various business rules (for example, quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities are identified. Leveraging the identified competitors, a Random Forest regression model was developed and fine-tuned to predict market share. To identify the key drivers of market share at an overall level, the permutation feature importance of the attributes was calculated. For relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, was incorporated; this helped identify and rank the attributes that impact market share at each facility. The approach amalgamates two popular and efficient modeling practices, machine learning with graphs and tree-based regression, to reduce bias and drive strategic business decisions.
Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP
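A minimal sketch of the regression-and-explanation step is shown below, assuming a per-facility feature table has already been assembled; the synthetic features, model settings, and use of the shap package are illustrative assumptions.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_facilities, n_features = 60, 8
X = rng.normal(size=(n_facilities, n_features))          # stand-in facility attributes
market_share = 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.2, n_facilities))))

X_tr, X_te, y_tr, y_te = train_test_split(X, market_share, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

# Overall drivers: permutation importance on held-out facilities.
perm = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
print("permutation importances:", perm.importances_mean.round(3))

# Facility-level drivers: SHAP values from a tree explainer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("SHAP values for one facility:", shap_values[0].round(3))
```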
Procedia PDF Downloads 91
4554 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical scenarios, the size of the image to be localized is not fixed, and it is impractical to train different networks for all possible sizes. When an image's size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image in common ways. This results in the loss of information important to the geolocalization task, affecting the performance of the method. For example, excessive down-sampling can blur building contours, and inappropriate cropping can remove key semantic elements, leading to incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, keeping the geometric shapes consistent with the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to accept arbitrary-sized input images but also enhances the geometric features that are crucial to the geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
Procedia PDF Downloads 214
4553 Solventless C−C Coupling of Low Carbon Furanics to High Carbon Fuel Precursors Using an Improved Graphene Oxide Carbocatalyst
Authors: Ashish Bohre, Blaž Likozar, Saikat Dutta, Dionisios G. Vlachos, Basudeb Saha
Abstract:
Graphene oxide, decorated with surface oxygen functionalities, has emerged as a sustainable alternative to precious metal catalysts for many reactions. Herein, we report for the first time that graphene oxide becomes highly active for C-C coupling upon incorporation of multilayer crystalline features, a highly oxidized surface, Brønsted acidic functionalities, and defect sites on the surface and edges via modified oxidation. The resulting improved graphene oxide (IGO) demonstrates superior activity to commonly used framework zeolites for upgrading low-carbon biomass furanics to long-carbon-chain aviation fuel precursors. A maximum 95% yield of the C15 fuel precursor with high selectivity is obtained at low temperature (60 °C) under neat conditions via hydroxyalkylation/alkylation (HAA) of 2-methylfuran (2-MF) and furfural. The coupling of 2-MF with carbonyl molecules ranging from C3 to C6 produced precursors with carbon numbers 12 to 21. The catalyst becomes inactive in the 4th cycle due to the loss of oxygen functionalities, defect sites, and multilayer features; however, it regains comparable activity upon regeneration. Extensive microscopic and spectroscopic characterization of the fresh and reused IGO is presented to elucidate the high activity of IGO and to establish a correlation between activity and surface and structural properties. Kinetic Monte Carlo (KMC) and density functional theory (DFT) calculations are presented to further illustrate the surface features and the reaction mechanism.
Keywords: methacrylic acid, itaconic acid, biomass, monomer, solid base catalyst
Procedia PDF Downloads 173
4552 Orthosis and Finite Elements: A Study for Development of New Designs through Additive Manufacturing
Authors: M. Volpini, D. Alves, A. Horta, M. Borges, P. Reis
Abstract:
The gait pattern of people with motor limitations creates demand for auxiliary locomotion devices. These movement-assistance artifacts vary in shape, size, and functional features according to the clinical application desired. Among lower-limb orthoses, the ankle-foot orthosis aims to improve the ability to walk in people with different neuromuscular limitations, although such devices do not always meet patients' expectations regarding their aesthetic and functional characteristics. The purpose of this study is to explore the possibility of using new designs produced by additive manufacturing to reproduce the shape and functional features of an ankle-foot orthosis in an efficient and modern way. Therefore, this work presents a study of the mechanical forces through finite element analysis of an ankle-foot orthosis. A study of the stress distribution in the orthopedic device is presented, both in orthostatism and during the movement of the patient's walk.
Keywords: additive manufacture, new designs, orthoses, finite elements
Procedia PDF Downloads 211
4551 Detection of Clipped Fragments in Speech Signals
Authors: Sergei Aleinik, Yuri Matveev
Abstract:
In this paper, a novel method for the detection of clipping in speech signals is described. It is shown that the new method performs better than known clipping detection methods, is easy to implement, and is robust to changes in signal amplitude, data size, etc. Statistical simulation results are presented.
Keywords: clipping, clipped signal, speech signal processing, digital signal processing
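The abstract does not spell out the algorithm, so the sketch below shows only a common baseline for clipping detection, flagging an excess of samples piled up near the signal's extreme amplitudes, purely as an illustration of the problem and not as the authors' method.

```python
import numpy as np

def clipping_ratio(signal, margin=0.999):
    """Fraction of samples at or beyond `margin` of the observed peak amplitude.

    Clipped speech shows an abnormal pile-up of samples at the extremes of its
    amplitude histogram; clean speech concentrates mass near zero instead.
    """
    peak = np.max(np.abs(signal))
    if peak == 0:
        return 0.0
    return np.mean(np.abs(signal) >= margin * peak)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
clean = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.05 * rng.standard_normal(t.size)
clipped = np.clip(clean, -0.4, 0.4)            # simulate hard clipping

print("clean  :", round(clipping_ratio(clean), 4))
print("clipped:", round(clipping_ratio(clipped), 4))   # markedly higher ratio
```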
Procedia PDF Downloads 393
4550 Feature Selection for Production Schedule Optimization in Transition Mines
Authors: Angelina Anani, Ignacio Ortiz Flores, Haitao Li
Abstract:
The use of underground mining methods has increased significantly over the past decades. This increase has also been spurred on by several mines transitioning from surface to underground mining. However, determining the transition depth can be a challenging task, especially when coupled with production schedule optimization. Several researchers have simplified the problem by excluding operational features relevant to production schedule optimization. Our research objective is to investigate the extent to which the operational features of transition mines that are accounted for affect the optimal production schedule. We also provide a framework for the factors to consider in production schedule optimization for transition mines. An integrated mixed-integer linear programming (MILP) model is developed that maximizes the NPV as a function of the production schedule and the transition depth. A case study is performed to validate the model, with a comparative sensitivity analysis to obtain operational insights.
Keywords: underground mining, transition mines, mixed-integer linear programming, production schedule
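A heavily simplified sketch of such a model is given below using PuLP; the bench values, capacities, discount rate, and the single "one transition depth" constraint are invented toy data, and the actual formulation in the paper is far richer.

```python
import pulp

periods = range(4)
depths = [100, 150, 200]                  # candidate transition depths (m), toy values
benches = range(6)                        # ore benches, shallow to deep
value = {b: 12 - b for b in benches}      # undiscounted bench value (toy units)
bench_depth = {b: 50 * (b + 1) for b in benches}
rate = 0.1                                # discount rate per period

prob = pulp.LpProblem("transition_schedule", pulp.LpMaximize)
d = pulp.LpVariable.dicts("transition", depths, cat="Binary")
x = pulp.LpVariable.dicts("mine", [(b, t) for b in benches for t in periods], cat="Binary")

# Objective: discounted value of mined benches (NPV of the surface schedule).
prob += pulp.lpSum(value[b] * x[(b, t)] / (1 + rate) ** t for b in benches for t in periods)

# Exactly one transition depth; each bench mined at most once; capacity of 2 benches/period.
prob += pulp.lpSum(d[z] for z in depths) == 1
for b in benches:
    prob += pulp.lpSum(x[(b, t)] for t in periods) <= 1
for t in periods:
    prob += pulp.lpSum(x[(b, t)] for b in benches) <= 2

# A bench can appear in this (surface) schedule only if it lies above the chosen depth.
for b in benches:
    prob += pulp.lpSum(x[(b, t)] for t in periods) <= pulp.lpSum(
        d[z] for z in depths if bench_depth[b] <= z)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("NPV:", pulp.value(prob.objective))
print("chosen depth:", [z for z in depths if d[z].value() == 1])
```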
Procedia PDF Downloads 169