Search results for: forest cover-type dataset
1214 Preliminary Result on the Impact of Anthropogenic Noise on Understory Bird Population in Primary Forest of Gaya Island
Authors: Emily A. Gilbert, Jephte Sompud, Andy R. Mojiol, Cynthia B. Sompud, Alim Biun
Abstract:
Gaya Island of Sabah is known for its wildlife and marine biodiversity, and it has marked itself as one of the hottest tourist destinations in the world. Gaya Island's tourism activities have contributed to Sabah's economic revenue through the high number of tourists visiting the island. However, this has led to increased anthropogenic noise derived from tourism activities, which may greatly interfere with animals such as understory birds that rely on acoustic signals as a tool for communication. Many studies in other regions reveal that anthropogenic noise decreases the species richness of avian communities. In Malaysia, however, published research on the impact of anthropogenic noise on understory birds is still scarce. This study was conducted to fill this gap; it aims to investigate the impact of anthropogenic noise on the understory bird population. Three sites within the primary forest of Gaya Island were chosen to sample the level of anthropogenic noise in relation to the understory bird population. Noise mapping was used to measure the anthropogenic noise level and to identify zones with a high anthropogenic noise level (> 60 dB) and zones with a low anthropogenic noise level (< 60 dB), based on the standard noise-level threshold. Mist netting and ring banding were the sole methods used, as they can determine the diversity of the understory bird population in Gaya Island. The preliminary study was conducted from 15 to 26 April and 5 to 10 May 2015, with two mist nets set up in each zone within the selected sites. The data were analyzed using descriptive analysis, presence-absence analysis, diversity indices, and a diversity t-test; the PAST software was used to analyze the obtained data. The results present a total of 60 individuals, consisting of 12 species from 7 families of understory birds, recorded across the three sites in Gaya Island. The Shannon-Wiener index shows that species diversity in the high and low anthropogenic noise zones was 1.573 and 2.009, respectively. The statistical analysis, however, shows no significant difference between these zones. Nevertheless, the presence-absence analysis shows that species richness in the low-noise zone was higher than in the high-noise zone. This result indicates that anthropogenic noise has an impact on the population diversity of understory birds. An in-depth study with a larger sample size at the selected sites is still urgently needed to fully understand the impact of anthropogenic noise on the understory bird population, so that it can be incorporated into wildlife management for a sustainable environment in Gaya Island.
Keywords: anthropogenic noise, biodiversity, Gaya Island, understory bird
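The Shannon-Wiener index reported above is the standard formula H' = -Σ p_i ln p_i, where p_i is the proportion of individuals belonging to species i. A minimal sketch of the calculation follows; the species counts are hypothetical stand-ins, not the study's mist-netting data.

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical per-species capture counts for each zone (illustrative only)
high_noise_zone = [12, 6, 4, 3, 2, 1]
low_noise_zone = [8, 6, 5, 4, 3, 2, 2, 1]

print(round(shannon_wiener(high_noise_zone), 3))  # lower H' = lower diversity
print(round(shannon_wiener(low_noise_zone), 3))
```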
1213 Automatic Segmentation of Lung Pleura Based On Curvature Analysis
Authors: Sasidhar B., Bhaskar Rao N., Ramesh Babu D. R., Ravi Shankar M.
Abstract:
Segmentation of the lung pleura is a preprocessing step in computer-aided diagnosis (CAD) that helps reduce false positives in lung cancer detection. Existing methods fail to extract lung regions when nodules lie at the pleura of the lungs. In this paper, a new method is proposed that segments lung regions with nodules at the pleura, based on curvature analysis and morphological operators. The proposed algorithm was tested on a dataset of six patients, consisting of 60 images from the Lung Image Database Consortium (LIDC), and the results are satisfactory, with a 98.3% average overlap measure (AΩ).
Keywords: curvature analysis, image segmentation, morphological operators, thresholding
1212 Geographical Information System and Multi-Criteria Based Approach to Locate Suitable Sites for Industries to Minimize Agriculture Land Use Changes in Bangladesh
Authors: Nazia Muhsin, Tofael Ahamed, Ryozo Noguchi, Tomohiro Takigawa
Abstract:
One of the most challenging issues for achieving sustainable development in food security is land use change. The scarcity of land for agricultural production mainly arises from the unplanned transformation of agricultural land for infrastructure development, i.e., urbanization and industrialization. Land use without sustainability assessment can affect food security and environmental protection. Bangladesh, a densely populated country with limited arable land, is now facing challenges in meeting sustainable food security. Agricultural land is being used for economic growth by establishing industries, which are spreading from urban to suburban areas and consuming agricultural land. To minimize agricultural land losses to unplanned industrialization, compact economic zones should be identified in a scientific way. Therefore, the purpose of this study was to find suitable sites for industrial growth through land suitability analysis (LSA) using a geographical information system (GIS) and multi-criteria analysis (MCA). The goal was to consider both agricultural land and industry for sustainable land use development. The study also analyzed agricultural land use changes in a suburban area, using statistical data on agricultural land and primary data on the existing industries of the study area. The criteria selected for the LSA were proximity to major roads, proximity to local roads, and distance to rivers, waterbodies, settlements, flood-flow zones, and agricultural land. The spatial datasets for the criteria were collected from the respective departments of Bangladesh; in addition, an elevation dataset from the SRTM (Shuttle Radar Topography Mission) was used. The criteria were further analyzed as factors and constraints in ArcGIS®. Expert opinion was applied to weight the criteria according to the analytic hierarchy process (AHP), a multi-criteria technique. The decision rule was set using the 'weighted overlay' tool to aggregate the factors and constraints with the criteria weights. The LSA found that only 5% of the land was most suitable for industrial sites, with few compact areas suitable for industrial zones. The developed LSA is expected to help land use policy makers and urban developers ensure the sustainability of land use and agricultural production.
Keywords: AHP (analytical hierarchy process), GIS (geographic information system), LSA (land suitability analysis), MCA (multi-criteria analysis)
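The AHP weighting step described above derives criterion weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch follows; the criteria and expert judgments in the matrix are assumptions for illustration, not the study's.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four criteria
# (major roads, rivers, settlements, agricultural land), Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights /= weights.sum()            # normalized priority vector

# Consistency ratio: CR = (lambda_max - n) / (n - 1) / RI, RI = 0.90 for n = 4
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90
print(weights, cr)                  # CR < 0.1 indicates acceptable consistency
```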
1211 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging
Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists, identified in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant cases. All MR images were acquired at 1.5 T: a first basal T1w sequence and then four T1w acquisitions after paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and extraction of 150 radiomic features (30 features at each of 5 subsequent times), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select the features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. Ten cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset to allow the ML system to learn better from the data. Results: A Naive Bayes algorithm working on 79 features selected by the TWIST system proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of two training-testing procedures, ab-ba). In the subset of 47 non-specific nodules, the algorithm correctly predicted the outcome of 45 nodules that an expert radiologist could not classify. Conclusion: In this pilot study, we identified a radiomic approach that allows ML systems to perform well in the diagnosis of non-specific nodules at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist cannot identify the kind of lesion, and it reduces the need for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, avoiding strenuous follow-up and painful biopsy for the patient.
Keywords: breast, machine learning, MRI, radiomics
1210 Fire Risk Information Harmonization for Transboundary Fire Events between Portugal and Spain
Authors: Domingos Viegas, Miguel Almeida, Carmen Rocha, Ilda Novo, Yolanda Luna
Abstract:
Forest fires along the more than 1200 km of the Spanish-Portuguese border are increasingly frequent, currently reaching around 2000 fire events per year. Some of these events develop into large international wildfires requiring concerted operations based on information shared between the two countries. The Valencia de Alcantara fire (2003), which caused several fatalities and burnt more than 13000 ha, is a reference example of these international events. Currently, Portugal and Spain have a specific cross-border cooperation protocol on wildfire response for a strip of about 30 km (15 km on each side). Public authorities recognize the success of this collaboration, but it is also accepted that the cooperation should include more functionalities, such as the development of a common risk information system for transboundary fire events. Since the Portuguese and Spanish authorities use different approaches to determine the fire risk index inputs and different methodologies to assess fire risk, joint firefighting operations are sometimes jeopardized: the information is not harmonized, and civil protection agents from the two countries do not share a single understanding of the situation. This paper therefore presents a methodology for harmonizing the calculation and perception of fire risk by the Portuguese and Spanish civil protection authorities, together with the final results. The fire risk index used in this work is the Canadian Fire Weather Index (FWI), which is based on meteorological data. The FWI is limited in its application, as it does not take into account other important factors with a great effect on fire ignition and development. Combining these factors is very complex since, besides meteorology, it involves several parameters from different fields, namely sociology, topography, vegetation, and soil cover. The meaning of FWI values therefore differs from region to region, according to the specific characteristics of each region. In this work, a methodology is proposed for calibrating the FWI based on the number of fire occurrences and the burnt area in the transboundary regions of Portugal and Spain, so that fire risk can be assessed from calibrated FWI values. As mentioned above, cooperative firefighting operations require a common perception of the shared information; a common fire risk classification for fire events in the transboundary strip is therefore proposed, with the objective of harmonizing this type of information. This work is part of the ECHO project SpitFire - Spanish-Portuguese Meteorological Information System for Transboundary Operations in Forest Fires, which aims to develop a web platform for sharing information and decision-support tools to be used in international fire events involving Portugal and Spain.
Keywords: data harmonization, FWI, international collaboration, transboundary wildfires
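The abstract does not specify the calibration procedure. One plausible sketch, assuming calibration means deriving region-specific FWI class thresholds from percentiles of the FWI values observed on historical fire days (an assumption, not the authors' stated method):

```python
import numpy as np

def calibrate_fwi_classes(fwi_on_fire_days, percentiles=(50, 75, 90, 95)):
    """Derive region-specific FWI class thresholds so the same risk class
    carries the same meaning on both sides of the border."""
    return np.percentile(fwi_on_fire_days, percentiles)

# Hypothetical daily FWI values recorded on fire days in one border region
fwi_fire_days = np.random.default_rng(0).gamma(shape=4.0, scale=8.0, size=500)
thresholds = calibrate_fwi_classes(fwi_fire_days)
labels = ["low", "moderate", "high", "very high", "extreme"]

def classify(fwi_value):
    # searchsorted returns 0..4, indexing the five class labels
    return labels[int(np.searchsorted(thresholds, fwi_value))]

print(thresholds, classify(45.0))
```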
1209 Combining Shallow and Deep Unsupervised Machine Learning Techniques to Detect Bad Actors in Complex Datasets
Authors: Jun Ming Moey, Zhiyaun Chen, David Nicholson
Abstract:
Bad actors are often hard to detect in data that imprints their behaviour patterns because they are comparatively rare events embedded in non-bad-actor data. An unsupervised machine learning framework is applied here to detect bad actors in financial crime datasets that record millions of transactions undertaken by hundreds of actors (<0.01% bad). Specifically, the framework combines 'shallow' (PCA, Isolation Forest) and 'deep' (autoencoder) methods to detect outlier patterns. Detection performance is analyzed for both the individual methods and their combination.
Keywords: detection, machine learning, deep learning, unsupervised, outlier analysis, data science, fraud, financial crime
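As an illustration of combining shallow and deep outlier detectors, the following sketch scores synthetic transactions with PCA reconstruction error, an Isolation Forest, and a small autoencoder, then averages the score ranks. The combination rule and data are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 10))          # "normal" transactions
X[:5] += 6                               # a handful of bad-actor outliers
X = StandardScaler().fit_transform(X)

# Shallow detector 1: PCA reconstruction error
pca = PCA(n_components=3).fit(X)
pca_err = ((X - pca.inverse_transform(pca.transform(X))) ** 2).sum(axis=1)

# Shallow detector 2: Isolation Forest (higher = more anomalous)
iso = IsolationForest(random_state=0).fit(X)
iso_score = -iso.score_samples(X)

# "Deep" detector: a tiny autoencoder trained to reconstruct its input
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=500, random_state=0).fit(X, X)
ae_err = ((X - ae.predict(X)) ** 2).sum(axis=1)

def rank(s):
    """Normalized rank of each score, for scale-free combination."""
    return np.argsort(np.argsort(s)) / len(s)

combined = (rank(pca_err) + rank(iso_score) + rank(ae_err)) / 3
print(np.argsort(combined)[-5:])  # indices of the most suspicious samples
```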
1208 TDApplied: An R Package for Machine Learning and Inference with Persistence Diagrams
Authors: Shael Brown, Reza Farivar
Abstract:
Persistence diagrams capture valuable topological features of datasets that other methods cannot uncover. Still, their adoption in data pipelines has been limited by the lack of publicly available tools in R (and Python) for analyzing groups of them with machine learning and statistical inference. In an easy-to-use and scalable R package called TDApplied, we implement several applied analysis methods tailored to groups of persistence diagrams. The two main contributions of our package are comprehensiveness (most functions have no implementation elsewhere) and speed (shown through benchmarking against other R packages). We demonstrate applications of the tools on simulated data to illustrate how easily practical analyses of any dataset can be enhanced with topological information.
Keywords: machine learning, persistence diagrams, R, statistical inference
1207 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing
Authors: Tolulope Aremu
Abstract:
The key process steps in producing liquid detergent products — formulation, mixing, filling, and packaging — can introduce defects that compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Defect detection is usually performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing defect characterization in liquid detergent manufacturing using machine learning algorithms. Various machine learning models — support vector machines (SVM), decision trees, random forests, and convolutional neural networks (CNN) — were tested on the detection and classification of defects such as wrong viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. The study draws on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, including real-time sensor data, imaging data, and historical production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs reached 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, with a reduction in false positives of about 30%. The optimized SVM model reached 94% in detecting formulation defects such as viscosity and color variation. These performance metrics represent a large leap in defect detection accuracy compared with the roughly 80% level achieved so far by rule-based systems. Moreover, the optimized models speed up defect characterization, bringing detection time below 15 seconds with real-time data processing, from an average of 3 minutes with manual inspection. This time saving is combined with a 25% reduction in production downtime thanks to proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine-learning-driven monitoring supports predictive maintenance and corrective measures, yielding a 20% improvement in overall production efficiency. The optimization of machine learning algorithms for defect characterization therefore offers liquid detergent companies scalability, efficiency, and improved operational performance with higher product quality. In general, this method could be applied across the fast-moving consumer goods industry, leading to improved quality control processes.
Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods
1206 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involved four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classifier achieved the best results: 93.3% classification accuracy for engagement and 42.9% for disengagement. We compared these results with outcomes from other models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than features from any reduced set of sensors, and that high-level handpicked features improve classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. Eye gaze was shown to be the single most important sensor feature for classifying engagement and disengagement. We have shown that we can accurately predict the engagement level of students with learning disabilities in real time, without relying on inter-rater reliability, human observation, or a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each student's individual needs, and it can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
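A minimal sketch of leave-one-out cross-validation with a random forest, as described above, on stand-in data (the real 59-session feature matrix is not public, so random values are used as placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical stand-in: 59 sessions x 9 extracted features, labeled
# engaged (1) / disengaged (0) from the CPT outcomes.
rng = np.random.default_rng(7)
X = rng.normal(size=(59, 9))
y = rng.integers(0, 2, size=59)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one fold per session
print(f"LOOCV accuracy: {scores.mean():.3f}")
```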
1205 Labile and Humified Carbon Storage in Natural and Anthropogenically Affected Luvisols
Authors: Kristina Amaleviciute, Ieva Jokubauskaite, Alvyra Slepetiene, Jonas Volungevicius, Inga Liaudanskiene
Abstract:
The main task of this research was to investigate the chemical composition of differently used soils across their profiles. To identify differences between the soils, soil organic carbon (SOC) and its fractional composition — dissolved organic carbon (DOC) and mobile humic acids (MHA) — and the C:N ratio of natural and anthropogenically affected Luvisols were investigated. Research object: natural and anthropogenically affected Luvisols, Akademija, Kedainiai distr., Lithuania. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LAMMC. Soil samples for chemical analyses were taken from the genetic soil horizons. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (VARIAN) at a wavelength of 590 nm using glucose standards. For the determination of mobile humic acids (MHA), an extraction procedure with 0.1 M NaOH solution was carried out. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph. pH was measured in 1 M H2O. Total N was determined by the Kjeldahl method. Results: Based on the obtained results, it can be stated that the chemical composition is transformed through the genetic soil horizons. The morphology of the upper layers of the soil profile, formed under natural conditions, was changed by anthropogenic (agrogenic, urbogenic, technogenic, and other) influences. Anthropogenic activities and mechanical and biochemical disturbances destroy the natural characteristics of soil formation and complicate the interpretation of soil development. Due to intensive cultivation, the pH curve levels out relative to the natural Luvisol (the acidification characteristic of the E horizon disappears). Luvisols affected by agricultural activities were characterized by a decrease in the absolute amount of humic substances in separate horizons, but more sustainable, higher carbon sequestration and a thicker humic horizon were observed compared with the forest Luvisol, although the average content of humic substances in the soil profile was lower. The soil organic carbon content in the anthropogenic Luvisols was lower than in the natural forest soil but was spread more evenly over a thicker accumulative horizon. These data suggest that the geo-ecological organization of Luvisols declines while their agroecological organization increases. Acknowledgement: This work was supported by the National Science Program 'The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems' [grant number SIT-9/2015], funded by the Research Council of Lithuania.
Keywords: agrogenization, dissolved organic carbon, luvisol, mobile humic acids, soil organic carbon
1204 Facial Emotion Recognition Using Deep Learning
Authors: Ashutosh Mishra, Nikhil Goyal
Abstract:
A 3D facial emotion recognition model based on deep learning is proposed in this paper. Two convolution layers and a pooling layer are employed in the deep learning architecture, with pooling carried out after the convolutions. The probabilities for the various classes of human faces are calculated using the sigmoid activation function. A set of face images from the Kaggle dataset is used to verify the accuracy of the deep learning-based face recognition model. The model's accuracy is about 65 percent, which is lower than that of other facial expression recognition techniques, despite significant gains in representation precision due to the nonlinearity of deep image representations.
Keywords: facial recognition, computational intelligence, convolutional neural network, depth map
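A minimal sketch of the described architecture — two convolution layers, one pooling layer, and a sigmoid output — in Keras. The 48×48 grayscale input size and the 7 emotion classes are assumptions (typical of Kaggle facial-expression data), not confirmed by the abstract; a softmax output with categorical cross-entropy would be the more common choice for mutually exclusive classes.

```python
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 7  # assumed number of emotion classes

model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),        # assumed grayscale input size
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),                   # pooling after the convolutions
    layers.Flatten(),
    layers.Dense(num_classes, activation="sigmoid"),  # per the abstract
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```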
1203 Net Interest Margin of Cooperative Banks in Low Interest Rate Environment
Authors: Karolína Vozková, Matěj Kuc
Abstract:
This paper deals with the impact of decreasing interest rates on the performance of commercial and cooperative banks in the Eurozone, measured by the net interest margin. The analysis was performed on a balanced dataset of 268 commercial and 726 cooperative banks spanning the 2008-2015 period, using a fixed-effects panel estimation method. As expected, we found a negative relationship between market rates and the net interest margin. Our results suggest that the impact of falling interest income differs across individual banking business models: cooperative banks were hit much harder by the decrease in market interest rates, which might be due to their ownership structure and more restrictive business regulation.
Keywords: cooperative banks, performance, negative interest rates, risk management
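A fixed-effects panel estimation of this kind can be sketched with the linearmodels package; the file, variable names, and controls below are hypothetical placeholders, not the paper's specification.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical layout: one row per bank-year, indexed by (bank_id, year),
# with 'nim' (net interest margin), 'market_rate', and assumed controls.
df = pd.read_csv("banks.csv").set_index(["bank_id", "year"])

# EntityEffects absorbs time-invariant bank heterogeneity (fixed effects)
model = PanelOLS.from_formula(
    "nim ~ market_rate + size + equity_ratio + EntityEffects", data=df
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```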
1202 Optimizing Communications Overhead in Heterogeneous Distributed Data Streams
Authors: Rashi Bhalla, Russel Pears, M. Asif Naeem
Abstract:
In this 'information explosion era', data are a critical commodity, and mining knowledge from vertically distributed data streams incurs a huge communication cost. However, efforts to decrease communication in a distributed environment can adversely affect classification accuracy; the research challenge therefore lies in maintaining a balance between transmission cost and accuracy. This paper proposes a method based on Bayesian inference to reduce the communication volume in a heterogeneous distributed environment while retaining prediction accuracy. Our experimental evaluation reveals that a significant reduction in communication can be achieved across a diverse range of dataset types.
Keywords: big data, Bayesian inference, distributed data stream mining, heterogeneous distributed data
1201 Cellular Traffic Prediction through Multi-Layer Hybrid Network
Authors: Supriya H. S., Chandrakala B. M.
Abstract:
Deep learning models have recently been adopted successfully for network traffic prediction. However, training a deep learning model for various prediction tasks is considered a critical challenge for several reasons. This research work develops a Multi-Layer Hybrid Network (MLHN) for network traffic prediction and analysis; MLHN comprises three distinct networks that handle the different inputs for custom feature extraction. Furthermore, an optimized and efficient parameter-tuning algorithm is introduced to enhance parameter learning. MLHN is evaluated on the 'Big Data Challenge' dataset using the mean absolute error, root mean square error, and R² as metrics; its efficiency is further demonstrated through comparison with a state-of-the-art approach.
Keywords: MLHN, network traffic prediction
1200 Using Mining Methods of WEKA to Predict Quran Verb Tense and Aspect in Translations from Arabic to English: Experimental Results and Analysis
Authors: Jawharah Alasmari
Abstract:
In verb inflection, tense marks past/present/future action, and aspect marks progressive/continuous and perfect/completed actions. The usage and meaning of tense and aspect differ between Arabic and English. In this research, we applied data mining methods to test the predictive power of candidate features, using our dataset of Arabic verbs in context and their 7 translations. Weka machine learning classifiers are used in this experiment to examine the key features that can guide a translator toward an appropriate English translation of Arabic verb tense and aspect.
Keywords: Arabic verb, English translations, mining methods, Weka software
1199 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance. The major challenge faced by businesses is the significant cost associated with delayed service delivery due to system downtime; most businesses want to predict those problems and proactively prevent them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need to monitor system activity and enhance system-to-system or component-to-component interactions, generating large volumes of data known as big data. Analysis of big data is increasingly important; however, complexity inherent in the data, such as imbalanced classes, makes it extremely difficult to build accurate, high-precision models. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn growing research interest from both academia and industry. The large data generated by industrial processes come with varying degrees of complexity, which poses a challenge for analytics: the class imbalance problem exists pervasively in industrial datasets and can degrade the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model is developed to predict aircraft component replacement in advance by exploring historical aircraft data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning; we also investigate its impact on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach using real-world aircraft operation and maintenance datasets spanning 7 years. Our approach performs better than similar approaches and also shows good performance on multiclass imbalanced datasets compared with other baseline classifiers.
Keywords: prognostics, data-driven, imbalance classification, deep learning
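The abstract does not detail the hybrid ensemble. One common hybrid scheme for imbalanced data — minority oversampling (SMOTE) followed by an ensemble that under-samples each bootstrap — can be sketched with imbalanced-learn on synthetic data; this is an analogous technique, not the authors' exact method.

```python
from imblearn.ensemble import BalancedRandomForestClassifier
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a heavily imbalanced replacement-event dataset
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.98],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (replacement) class on the training split only,
# then fit an ensemble that also balances each bootstrap sample.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = BalancedRandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te)))
```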
1198 Visualization-Based Feature Extraction for Classification in Real-Time Interaction
Authors: Ágoston Nagy
Abstract:
This paper introduces a method of using unsupervised machine learning to visualize the feature space of a dataset in 2D, in order to find the most characteristic segments in the set. After dimension reduction, users can select clusters by manual drawing. Selected clusters are recorded into a data model that is used for later predictions based on real-time data. Predictions are made with supervised learning, using the Gesture Recognition Toolkit. The paper introduces two example applications: a semantic audio organizer for analyzing incoming sounds, and a gesture database organizer where gestural data (recorded by a Leap Motion) are visualized for further manipulation.
Keywords: gesture recognition, machine learning, real-time interaction, visualization
1197 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices, and society needs technological and process innovations that enhance oil recovery while reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, providing data-driven insights for better designs and decisions in various engineering disciplines; however, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data, and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset: K-means clustering is used to partition the observations into clusters; principal component analysis is applied to emphasize variation and bring out strong patterns, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning, are then reviewed, and appropriate ones are selected based on prediction accuracy, model robustness, and reproducibility. The recognized knowledge and patterns are finally integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guiding field operations, leading to better designs, higher oil recovery, and better economic returns for future wells in unconventional oil reserves.
Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
1196 Detect QOS Attacks Using Machine Learning Algorithm
Authors: Christodoulou Christos, Politis Anastasios
Abstract:
A large majority of users favour wireless LAN connections because they are so simple to use, but a wireless network can be the target of numerous attacks. Class hijacking is a well-known attack that is fairly simple to execute and has significant repercussions for users. Statistical flow analysis based on machine learning (ML) techniques is a promising categorization methodology. In a given dataset — in the context of this paper, a collection of components representing frames belonging to various flows — machine learning can offer a technique for identifying and characterizing structural patterns. Individual packets can be classified using these patterns, making it possible to identify fraudulent conduct, such as class hijacking, and take the necessary action as a result. In this study, we explore a way to use machine learning approaches to thwart this attack.
Keywords: wireless LAN, quality of service, machine learning, class hijacking, EDCA remapping
1195 Enhancing Athlete Training using Real Time Pose Estimation with Neural Networks
Authors: Jeh Patel, Chandrahas Paidi, Ahmed Hambaba
Abstract:
Traditional methods for analyzing athlete movement often lack the detail and immediacy required for optimal training. This project addresses this limitation by developing a real-time human pose estimation system specifically designed to enhance athlete training across various sports. The system leverages the power of convolutional neural networks (CNNs) to provide a comprehensive and immediate analysis of an athlete's movement patterns during training sessions. The core architecture utilizes dilated convolutions to capture crucial long-range dependencies within video frames, combined with a robust encoder-decoder architecture that further refines pose estimation accuracy. This capability is essential for precise joint localization across the diverse range of athletic poses encountered in different sports. Furthermore, by quantifying movement efficiency, power output, and range of motion, the system provides data-driven insights that can be used to optimize training programs. Pose estimation data can also be used to develop personalized training plans targeting specific weaknesses identified in an athlete's movement patterns. To overcome the limitations posed by outdoor environments, the project employs strategies such as multi-camera configurations and depth-sensing techniques, which can enhance pose estimation accuracy in challenging lighting and occlusion scenarios. A dataset was collected from the labs of Martin Luther King at San Jose State University. The system was evaluated through a series of tests measuring its efficiency and accuracy in real-world scenarios, and the results indicate a high level of precision in recognizing different poses, substantiating the potential of this technology in practical applications. Challenges such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced pose detection model and lays the groundwork for future innovations in assistive enhancement technologies.
Keywords: computer vision, deep learning, human pose estimation, U-NET, CNN
1194 Pattern Recognition Search: An Advancement Over Interpolation Search
Authors: Shahpar Yilmaz, Yasir Nadeem, Syed A. Mehdi
Abstract:
Searching for a record in a dataset is a frequent task in any data structure-related application, so a fast and efficient search algorithm is important for yielding the quickest results and enhancing overall productivity. Interpolation search is one such technique used to search through a sorted set of elements. This paper proposes a new algorithm, an advancement over interpolation search, for searching a sorted array. Pattern Recognition Search, or PR Search (PRS), like interpolation search, is a pattern-based divide-and-conquer algorithm whose objective is to reduce the sample size in order to quicken the process; it does so by treating the array as a perfect arithmetic progression and thereby deducing the key element's position. We highlight some key drawbacks of interpolation search, which are accounted for in Pattern Recognition Search.
Keywords: array, complexity, index, sorting, space, time
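For reference, interpolation search — the baseline that PRS improves upon — probes the position obtained by treating the sorted array as an approximate arithmetic progression, the same idea PRS formalizes. A minimal sketch:

```python
def interpolation_search(arr, key):
    """Estimate the key's position by linear interpolation over a sorted array."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[lo] == arr[hi]:                       # avoid division by zero
            break
        # Probe position assuming values grow roughly linearly with index
        pos = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo < len(arr) and arr[lo] == key else -1

data = list(range(3, 300, 7))            # a perfect arithmetic progression
print(interpolation_search(data, 94))    # found on the first probe here
```

On a perfect arithmetic progression, as in this example, the first probe lands exactly on the key, which is the best case both algorithms exploit.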
1193 Analysis of Spatial and Temporal Data Using Remote Sensing Technology
Authors: Kapil Pandey, Vishnu Goyal
Abstract:
Spatial and temporal data analysis is very well known in the field of satellite image processing, and when spatial data are combined with time-series analysis, they give significant results in change detection studies. In this paper, GIS and remote sensing techniques have been used for change detection using time-series satellite imagery of Uttarakhand state during 1990-2010. Natural vegetation, urban area, and forest cover were chosen as the main land-use classes to study. Land-use/land-cover classes for several years were prepared using satellite images. A maximum likelihood supervised classification technique was adopted, a land-use change index was generated, and graphical models were used to present the changes.
Keywords: GIS, landuse/landcover, spatial and temporal data, remote sensing
1192 Diagnosis of Diabetes Using Computer Methods: Soft Computing Methods for Diabetes Detection Using Iris
Authors: Piyush Samant, Ravinder Agarwal
Abstract:
Complementary and alternative medicine (CAM) techniques are quite popular and effective for chronic diseases. Iridology is a more than 150-year-old CAM technique that analyzes the patterns, tissue weakness, color, shape, structure, etc., of the iris for disease diagnosis. The objective of this paper is to validate the use of iridology for the diagnosis of diabetes, a systemic disease with ocular effects. Data from 200 subjects, 100 diabetic and 100 non-diabetic, were evaluated. The complete procedure was kept very simple and free from the involvement of any iridologist. The region of interest was cropped from the normalized iris, and 63 features were extracted using statistical measures, texture analysis, and the two-dimensional discrete wavelet transform. A comparison of the accuracies of six different classifiers is presented; the best result, 89.66% accuracy, was achieved by the random forest classifier.
Keywords: complementary and alternative medicine, classification, iridology, iris, feature extraction, disease prediction
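A minimal sketch of the feature-extraction step — basic statistics plus a one-level 2-D discrete wavelet transform on a cropped iris region — using PyWavelets; the specific feature set below is illustrative, not the paper's exact 63 features.

```python
import numpy as np
import pywt
from scipy import stats

def iris_roi_features(roi):
    """Statistical + 2-D DWT features from a cropped iris region of interest."""
    feats = [roi.mean(), roi.std(), stats.skew(roi.ravel()),
             stats.kurtosis(roi.ravel())]
    # One-level 2-D Haar DWT: approximation + horizontal/vertical/diagonal detail
    cA, (cH, cV, cD) = pywt.dwt2(roi, "db1")
    for band in (cA, cH, cV, cD):
        feats += [band.mean(), band.std(), np.abs(band).sum()]
    return np.array(feats)

roi = np.random.default_rng(1).random((64, 64))  # stand-in for a real ROI
print(iris_roi_features(roi).shape)              # feature vector per subject
```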
1191 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications; quantum technology, meanwhile, has revolutionized how machine learning is thought of by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum over classical approaches, this research concentrates on colored image data. Deep learning classification models are being created on quantum platforms but are still at a very early stage, and recent research has used black-and-white benchmark image datasets such as MNIST and Fashion-MNIST. MNIST and CIFAR-10 have been compared for binary classification, and the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance the real-time applicability of QML. Deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not previously been benchmarked on colored images to determine how much better they are than classical models; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to grayscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much a quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the QCNN model encoded the images into a quantum simulator for feature extraction using quantum gate rotations; the measurements were then carried out on the classical computer. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may further increase the accuracy. This study demonstrates that quantum machine and deep learning models can be superior to classical machine learning approaches in processing speed and accuracy when used for classification on colored classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
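A minimal sketch of the kind of hybrid circuit described — angle-encoding pixel values with rotations, entangling neighbouring qubits, and measuring expectation values as features for a classical head — using PennyLane; the gate choices and qubit count are assumptions, not the paper's exact circuit.

```python
import pennylane as qml
import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_features(pixels, thetas):
    # Angle-encode a patch of normalized pixel values
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # Trainable rotations, then entangle neighbouring qubits
    for i in range(n_qubits):
        qml.RZ(thetas[i], wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

pixels = np.random.rand(n_qubits)        # stand-in for a grayscale patch
thetas = np.random.rand(n_qubits)
print(quantum_features(pixels, thetas))  # features fed to a classical classifier
```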
1190 Application of Fuzzy Multiple Criteria Decision Making for Flooded Risk Region Selection in Thailand
Authors: Waraporn Wimuktalop
Abstract:
This research selects regions that are vulnerable to flooding at different levels. Mathematical principles are systematically and rationally utilized as a tool to solve the region-selection problem; the method chosen is multiple criteria decision making (MCDM), with two analysis techniques: TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP (Analytic Hierarchy Process). Three criteria were considered in this research: climate, represented by rainfall; geography, represented by height above mean sea level; and land utilization, covering both forest and agricultural use. The study found that the South has the highest risk of flooding, followed by the East, the Centre, the North-East, the West, and the North, respectively.
Keywords: multiple criteria decision making, TOPSIS, analytic hierarchy process, flooding
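For reference, the TOPSIS ranking used above can be sketched as follows; the regional scores, weights, and benefit directions are hypothetical placeholders, not the study's data.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).
    benefit[j] is True if larger values of criterion j mean higher risk here."""
    norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))   # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)       # relative closeness: higher = riskier

# Hypothetical regional scores: rainfall (mm), elevation (m), land-use index
regions = ["North", "North-East", "Centre", "East", "West", "South"]
m = np.array([[1200., 350., 0.6], [1400., 180., 0.5], [1300., 40., 0.4],
              [1700., 60., 0.5], [1500., 120., 0.6], [2400., 25., 0.3]])
w = np.array([0.5, 0.3, 0.2])                  # weights, e.g., from AHP
benefit = np.array([True, False, False])       # more rain / lower land = riskier
scores = topsis(m, w, benefit)
print(sorted(zip(scores, regions), reverse=True))
```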
1189 Unraveling Language Contact through Syntactic Dynamics of 'Also' in Hong Kong and Britain English
Authors: Xu Zhang
Abstract:
This article unveils an indicator of language contact between English and Cantonese in one of the Outer Circle Englishes, Hong Kong (HK) English, through an empirical investigation of 1000 tokens from the Global Web-based English (GloWbE) corpus, employing frequency analysis and logistic regression analysis. Cantonese, and Chinese in general, is contextually marked by an integral underlying thinking pattern: Chinese speakers rely on semantic context over syntactic rules and lexical forms. This linguistic trait carries over to their use of English, affording greater flexibility to formal elements in constructing English sentences. The study focuses on the syntactic positioning of the focusing subjunct 'also', a linguistic element used to add new or contrasting prominence to a specific sentence constituent. English generally allows flexibility in the relative position of 'also', though there is a preference for close marking relationships. This article shifts attention to Hong Kong, where Cantonese and English converge and 'also' finds counterparts in Cantonese 'jaa' and Mandarin 'ye'. Employing a corpus-based, data-driven method, we investigate the syntactic position of 'also' in both HK and GB English, aiming to ascertain whether HK English exhibits greater 'syntactic freedom', allowing a more distant marking relationship between 'also' and its focused constituent than GB English. The analysis involves a random extraction of 500 samples each of HK and GB English from the GloWbE corpus, forming a dataset (N=1000). Exclusions were made for cases where 'also' functions as an additive conjunct or a copulative adverb, as well as sentences lacking sufficient indication that 'also' functions as a focusing particle. The final dataset comprises 820 tokens, 416 for GB and 404 for HK, annotated according to the focused constituent and the relative position of 'also'. Frequency analysis reveals significant differences between HK and GB English in the relative position of 'also' and in marking relationships. Regression analysis indicates a preference in HK English for a distant marking relationship between 'also' and its focused constituent; notably, the subject and other constituents emerge as significant predictors of a distant position for 'also'. Together, these findings underscore the nuanced linguistic dynamics of HK English and contribute to our understanding of language contact. They suggest that future pedagogical practice should consider incorporating syntactic variation within English varieties, facilitating learners' effective communication in diverse English-speaking environments and enhancing their intercultural communication competence.
Keywords: also, Cantonese, English, focus marker, frequency analysis, language contact, logistic regression analysis
1188 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (convolutional neural networks)? Will DL become the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary, atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which disobeys Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops; by varying these 2 super-parameters, we obtain a matrix of probabilities for each NIC. The 10 NICs can be combined with the functions AND, OR, and XOR, for a total number of combinations greater than 100,000. We thus obtain for each variable an image of at least 1166x1167 pixels, whose pixel intensity is proportional to the probability of the associated NIC and whose color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic dataset of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison over several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format, which opens up great perspectives in the analysis of metadata.
Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
1187 A Summary-Based Text Classification Model for Graph Attention Networks
Authors: Shuo Liu
Abstract:
In Chinese text classification tasks, redundant words and phrases can interfere with the extraction and analysis of text information, decreasing the accuracy of the classification model. To reduce irrelevant elements, exploit text content more efficiently, and improve the accuracy of text classification models, this paper first summarizes each text in the corpus using the TextRank algorithm, uses the words in the summary as nodes to construct a text graph, and then applies a graph attention network (GAT) to classify the text. In tests on a Chinese dataset collected from the web, classification accuracy improved over the direct method of building graph structures from the full text.
Keywords: Chinese natural language processing, text classification, abstract extraction, graph attention network
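A minimal sketch of the TextRank step — ranking sentences by PageRank over a similarity graph and keeping the top ones as the summary — with a deliberately crude word-overlap similarity; the real pipeline presumably uses a proper Chinese tokenizer and similarity measure.

```python
import itertools
import networkx as nx

def textrank_summary(sentences, sim, top_k=3):
    """Rank sentences by PageRank over a similarity graph (TextRank)."""
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        w = sim(sentences[i], sentences[j])
        if w > 0:
            g.add_edge(i, j, weight=w)
    scores = nx.pagerank(g, weight="weight")
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(best)]   # keep original order

def overlap(a, b):
    # Crude word-overlap similarity, for illustration only
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / (len(wa) + len(wb))

docs = ["graph attention networks classify text",
        "redundant words interfere with classification",
        "summary words become graph nodes",
        "graph nodes feed the attention network"]
print(textrank_summary(docs, overlap, top_k=2))
```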
1186 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement
Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes
Abstract:
Cycling, a popular mode of transportation, can also be perilous, as cyclists are vulnerable to collisions with vehicles and obstacles. This paper presents an innovative radar-based cyclist safety system designed to give cyclists real-time collision risk warnings. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller; it leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. The algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, using a coarse classification that distinguishes cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering step, we propose a 2-level clustering approach that builds on the state-of-the-art density-based spatial clustering of applications with noise (DBSCAN). The idea is to first cluster objects by velocity and then refine the analysis by clustering by position: the first level identifies groups of objects with similar velocities and movement patterns, and the second level refines the analysis by considering the spatial distribution of these objects, with the clusters from the first level serving as input to the second. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and used to classify objects with an SVM classifier, and potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our own dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. The radar-based cyclist safety system is a practical solution that can easily be installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. Experiments validating the system's feasibility achieved an impressive 85% accuracy on the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology
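A minimal sketch of the 2-level clustering idea — DBSCAN on velocity first, then DBSCAN on position within each velocity cluster — on a synthetic point cloud; the eps/min_samples values are assumptions, not the paper's tuned parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def two_level_cluster(points):
    """Level 1: cluster radar detections by radial velocity.
    Level 2: refine each velocity cluster by (x, y) position."""
    labels = np.full(len(points), -1)
    v_labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(points[:, [2]])
    next_id = 0
    for v in set(v_labels) - {-1}:
        idx = np.where(v_labels == v)[0]
        p_labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(points[idx, :2])
        for p in set(p_labels) - {-1}:
            labels[idx[p_labels == p]] = next_id
            next_id += 1
    return labels

# Columns: x (m), y (m), radial velocity (m/s) — synthetic detections
rng = np.random.default_rng(3)
car = np.column_stack([rng.normal(10, 0.5, 30), rng.normal(2, 0.5, 30),
                       rng.normal(-8, 0.2, 30)])
bike = np.column_stack([rng.normal(4, 0.4, 20), rng.normal(-1, 0.4, 20),
                        rng.normal(-3, 0.2, 20)])
print(two_level_cluster(np.vstack([car, bike])))
```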
1185 An Approach for Reducing Morphological Operator Dataset and Recognize Optical Character Based on Significant Features
Authors: Ashis Pradhan, Mohan P. Pradhan
Abstract:
Pattern matching is useful for recognizing characters in a digital image; OCR is one such technique, which reads characters from a digital image and recognizes them. Line segmentation is initially used to identify characters in an image, later refined by morphological operations like binarization, erosion, and thinning. This work discusses a recognition technique that defines a set of morphological operators based on their orientation within a character. These operators are further categorized into groups having similar shape but different orientation, for efficient utilization of memory. Finally, the characters are recognized according to the frequency of occurrence of those morphological operators in a hierarchy of significant patterns, and by comparing them against an existing database for each character.
Keywords: binary image, morphological patterns, frequency count, priority, reduction data set and recognition