Search results for: vector quantization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1145


245 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations

Authors: Kuniyoshi Abe

Abstract:

Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear system is high and the matrix is sparse. A number of hybrid Bi-CG methods, such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l), have been developed to improve the convergence of Bi-CG. Bi-CGSTAB is most often used for efficiently solving such systems, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to compute the Bi-CG coefficients as accurately as possible, and a stabilization strategy that stabilizes the computation of these coefficients has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide communication latency have been proposed. However, the numerical stability, specifically the convergence speed, of the parallel variants of Bi-CGSTAB may be worse than that of the standard Bi-CGSTAB. In this paper, we therefore compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the standard Bi-CGSTAB converges faster than the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
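For readers unfamiliar with the method, the standard (unpreconditioned) Bi-CGSTAB iteration can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the 3-by-3 test system, tolerance, and iteration cap are invented. The comments mark the inner products that become global reductions, and hence synchronization points, on a parallel machine.

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=1000):
    """Textbook (unpreconditioned) Bi-CGSTAB. Each iteration needs
    several inner products -- the global reductions that dominate
    communication cost on distributed-memory machines."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    r_hat = r.copy()               # shadow residual
    rho_old = alpha = omega = 1.0
    v = p = np.zeros(n)
    for _ in range(max_iter):
        rho = r_hat @ r            # inner product (global reduction)
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)  # inner product
        s = r - alpha * v
        if np.linalg.norm(s) < tol:
            x = x + alpha * p
            break
        t = A @ s
        omega = (t @ s) / (t @ t)  # two more inner products
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
        rho_old = rho
    return x

# Small illustrative symmetric positive definite system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
```

A parallel variant would restructure this loop so the reductions marked above overlap or merge; the point of the paper's comparison is what that restructuring costs in convergence speed.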

Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant

Procedia PDF Downloads 165
244 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations due to factors such as lighting condition and subject ID from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 99
243 Seismic Vulnerability Analysis of Arch Dam Based on Response Surface Method

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong

Abstract:

Earthquake is one of the main loads threatening dam safety. Once a dam is damaged, it brings huge losses of life and property to the country and its people; it is therefore very important to study the seismic safety of dams. Due to complex foundation conditions, high fortification intensity, and high scientific and technological content, it is necessary to adopt reasonable methods to evaluate the seismic safety performance of concrete arch dams built and under construction in strong earthquake areas. Structural seismic vulnerability analysis can predict the probability of structural failure at all levels under earthquakes of different intensity, which provides a scientific basis for reasonable seismic safety evaluation and decision-making. In this paper, the response surface method (RSM) is applied to the seismic vulnerability analysis of arch dams, which improves the efficiency of the vulnerability analysis. Based on the central composite test design method, material-seismic intensity samples are established. A response surface model with arch crown displacement as the performance index is obtained by finite element (FE) calculation of the samples, and the accuracy of the response surface model is then verified. To obtain the seismic vulnerability curves, the seismic intensity measure Sa(T1) is chosen to range from 0.1 to 1.2g, with an interval of 0.1g, for a total of 12 intensity levels. For each seismic intensity level, the arch crown displacements corresponding to 100 sets of different material samples can be calculated by algebraic operations on the response surface model, which avoids 1200 nonlinear dynamic analyses of the arch dam; thus, the efficiency of the vulnerability analysis is greatly improved.
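The efficiency argument above — replacing repeated nonlinear FE runs with algebraic evaluations of a fitted quadratic response surface — can be illustrated with a small sketch. Everything here is hypothetical: the function standing in for a nonlinear FE analysis, the face-centered design points, and the normalized parameters E (material) and a (intensity) are invented for the example, not taken from the dam model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive nonlinear FE analysis:
# crown displacement as a function of a normalized material parameter E
# and a normalized seismic intensity a.
def fe_displacement(E, a):
    return 0.5 * a + 0.3 * a**2 - 0.2 * E * a + 0.05 * E**2

# Design points: corners + center + axial points (a face-centered
# central composite design in two factors)
E_pts = np.array([-1.0, -1.0, 1.0, 1.0, 0.0, -1.0, 1.0, 0.0, 0.0])
a_pts = np.array([-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 0.0, -1.0, 1.0])
d_pts = fe_displacement(E_pts, a_pts)   # the "FE runs"

# Full quadratic response surface:
# d = b0 + b1*E + b2*a + b3*E^2 + b4*a^2 + b5*E*a
X = np.column_stack([np.ones_like(E_pts), E_pts, a_pts,
                     E_pts**2, a_pts**2, E_pts * a_pts])
coef, *_ = np.linalg.lstsq(X, d_pts, rcond=None)

def rsm_displacement(E, a):
    """Cheap algebraic surrogate for the FE model."""
    return (coef[0] + coef[1] * E + coef[2] * a
            + coef[3] * E**2 + coef[4] * a**2 + coef[5] * E * a)

# 100 material samples evaluated algebraically at one intensity level,
# instead of 100 nonlinear FE runs per level.
E_samples = rng.normal(0.0, 0.5, 100)
d_samples = rsm_displacement(E_samples, 0.8)
```

In the paper's setting the same surrogate is evaluated for 100 material samples at each of 12 intensity levels, replacing 1200 dynamic analyses.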

Keywords: high concrete arch dam, performance index, response surface method, seismic vulnerability analysis, vector-valued intensity measure

Procedia PDF Downloads 241
242 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks

Authors: Innocent Uzougbo Onwuegbuzie

Abstract:

Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path weight. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments.
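One plausible reading of "localized decision-making using geographic information" combined with "energy-aware metrics" is greedy geographic forwarding scored by residual energy. The sketch below is a hedged illustration of that idea, not the authors' AEAR protocol: the node coordinates, energies, radio range, and weighting are all invented.

```python
import math

# Hypothetical node set: id -> (x, y, residual energy)
nodes = {
    0: (0.0, 0.0, 1.0),   # source
    1: (1.0, 0.5, 0.9),
    2: (1.2, -0.4, 0.2),  # low-energy node, should be avoided
    3: (2.0, 0.1, 0.8),
    4: (3.0, 0.0, 1.0),   # sink
}
RADIO_RANGE = 1.6

def dist(a, b):
    (ax, ay, _), (bx, by, _) = nodes[a], nodes[b]
    return math.hypot(ax - bx, ay - by)

def neighbors(u):
    return [v for v in nodes if v != u and dist(u, v) <= RADIO_RANGE]

def route(src, sink, w_energy=0.5):
    """Greedy, localized energy-aware forwarding: each hop picks the
    neighbor that makes geographic progress toward the sink, scored by
    a blend of normalized progress and residual energy."""
    path, u = [src], src
    while u != sink:
        candidates = [v for v in neighbors(u)
                      if dist(v, sink) < dist(u, sink)]  # progress only
        if not candidates:
            return None  # local minimum; a real protocol would recover
        def score(v):
            progress = (dist(u, sink) - dist(v, sink)) / RADIO_RANGE
            energy = nodes[v][2]
            return (1 - w_energy) * progress + w_energy * energy
        u = max(candidates, key=score)
        path.append(u)
    return path

path = route(0, 4)
```

With the weighting above the route detours through the healthier node 3 rather than draining the low-energy node 2, which is the kind of lifespan-preserving behavior the abstract describes.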

Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy efficiency, network lifespan

Procedia PDF Downloads 39
241 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

The location and severity of damage in a structure can be identified from changes in its dynamic and static characteristics. Non-destructive techniques are common, economical, and reliable means of detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so the method locates damage through changes in the structural stiffness parameters. Changes in the static response caused by structural damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several damage scenarios are defined (single-damage and multiple-damage). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to verify that the best identification obtained is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of the method to detect damage in the given structures, and the figures show that damage detection in multiple-damage scenarios is also efficient. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
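The overall scheme — an analytical model parameterized by element stiffness reductions, with a GA minimizing the mismatch between measured and computed static displacements — can be sketched on a toy structure. A three-spring chain stands in for the plane truss here, and the GA operators (tournament selection, blend crossover, Gaussian mutation) are generic choices, not necessarily those of the paper.

```python
import random
import numpy as np

random.seed(1)

# Three-element spring chain fixed at the left end; damage is modelled
# as a stiffness reduction factor per element (1.0 = healthy).
k0 = np.array([100.0, 100.0, 100.0])   # healthy element stiffnesses
force = np.array([0.0, 0.0, 10.0])     # static load at the free DOFs

def displacements(alpha):
    k = alpha * k0
    # Assemble the 3-DOF stiffness matrix of the fixed-free chain
    K = np.array([[k[0] + k[1], -k[1],         0.0],
                  [-k[1],        k[1] + k[2], -k[2]],
                  [0.0,         -k[2],         k[2]]])
    return np.linalg.solve(K, force)

# "Measured" response of the damaged structure (element 2 lost 40%)
alpha_true = np.array([1.0, 0.6, 1.0])
u_measured = displacements(alpha_true)

def fitness(alpha):
    # Negative mismatch between measured and modelled displacements
    return -np.linalg.norm(u_measured - displacements(np.array(alpha)))

# Minimal real-coded GA: tournament selection, blend crossover, mutation
pop = [[random.uniform(0.3, 1.0) for _ in range(3)] for _ in range(60)]
for gen in range(80):
    new = []
    for _ in range(60):
        a = max(random.sample(pop, 3), key=fitness)
        b = max(random.sample(pop, 3), key=fitness)
        child = [min(1.0, max(0.3, (x + y) / 2 + random.gauss(0, 0.02)))
                 for x, y in zip(a, b)]
        new.append(child)
    pop = new
best = max(pop, key=fitness)
```

The recovered `best` vector should flag element 2 as the damaged one, i.e. give it the lowest stiffness factor.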

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 237
240 Analysis of Real Time Seismic Signal Dataset Using Machine Learning

Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.

Abstract:

Because seismic signals closely resemble non-seismic signals, it is difficult to detect earthquakes using conventional methods. In order to distinguish seismic events from non-seismic events on the basis of their amplitude, our study processes the data coming from seismic sensors. The authors propose a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, recursive short-term average/long-term average (STA/LTA), and Carl STA/LTA for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on significant feature extraction for machine learning-based seismic event detection, which motivates compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques because of the temporal complexity. The proposed features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a demonstration using a hybrid dataset (captured by different sensors) shows how the model may also be employed in a real-time setting while lowering false alarm rates. The study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). The experimental dataset consists of wideband seismic signals from the BSVK and CUKG station sensors, located near Basavakalyan, Karnataka, and at the Central University of Karnataka, respectively.
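The recursive STA/LTA characteristic function mentioned above has a standard exponential-moving-average formulation, sketched below on a synthetic trace. The trace, window lengths, and trigger threshold are illustrative only — not the BSVK/CUKG data or the study's tuned parameters.

```python
import numpy as np

def recursive_sta_lta(x, n_sta, n_lta):
    """Recursive STA/LTA characteristic function: exponential moving
    averages of the squared signal with short and long time constants."""
    csta, clta = 1.0 / n_sta, 1.0 / n_lta
    # Seed both averages with the early-signal power to avoid a
    # start-up transient in the ratio
    sta = lta = np.mean(x[:n_lta] ** 2)
    ratio = np.zeros(len(x))
    for i, v in enumerate(x):
        sq = v * v
        sta = csta * sq + (1 - csta) * sta
        lta = clta * sq + (1 - clta) * lta
        ratio[i] = sta / lta
    return ratio

# Synthetic trace: low-amplitude noise with a burst ("event") in the middle
rng = np.random.default_rng(42)
trace = 0.1 * rng.standard_normal(2000)
trace[1000:1100] += 2.0 * rng.standard_normal(100)

ratio = recursive_sta_lta(trace, n_sta=20, n_lta=500)
triggered = ratio > 4.0   # trigger threshold
```

The short average reacts quickly to the burst while the long average lags, so the ratio spikes at the event onset — the trigger-ratio behavior the abstract relies on.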

Keywords: Carl STA/LTA, feature extraction, real time, dataset, machine learning, seismic detection

Procedia PDF Downloads 127
239 Numerical Studies on Thrust Vectoring Using Shock-Induced Self Impinging Secondary Jets

Authors: S. Vignesh, N. Vishnu, S. Vigneshwaran, M. Vishnu Anand, Dinesh Kumar Babu, V. R. Sanal Kumar

Abstract:

The study of primary flow velocity and self-impinging secondary jet flow mixing is important from both the fundamental research and the application points of view. Real industrial configurations are more complex than the simple shear layers present in idealized numerical thrust-vectoring models, due to the presence of combustion, swirl, and confinement. Predicting the flow features of self-impinging secondary jets in a supersonic primary flow is complex owing to the large number of parameters involved. Earlier studies have highlighted several key features of self-impinging jets, but an extensive characterization of the interaction between a supersonic flow and self-impinging secondary sonic jets is still an active research topic. In this paper, numerical studies have been carried out using a validated two-dimensional k-omega standard turbulence model for the design optimization of a thrust vector control (TVC) system using shock-induced self-impinging secondary sonic jets in non-reacting flows. The flow features of the TVC system are examined for various secondary jets at different divergent locations and jet impinging angles, with the same inlet jet pressure and mass flow ratio. The results of the parametric studies reveal that, in addition to the primary-to-secondary mass flow ratio, the characteristics of the self-impinging secondary jets have a bearing on efficient thrust vectoring. We conclude that self-impinging secondary jet nozzles are better than a single jet nozzle with the same secondary mass flow rate, because fixing the self-impinging secondary jet nozzles at a proper jet angle could facilitate better thrust vectoring for any supersonic aerospace vehicle.

Keywords: fluidic thrust vectoring, rocket steering, supersonic to sonic jet interaction, TVC in aerospace vehicles

Procedia PDF Downloads 590
238 Advances of Image Processing in Precision Agriculture: Using Deep Learning Convolution Neural Network for Soil Nutrient Classification

Authors: Halimatu S. Abdullahi, Ray E. Sheriff, Fatima Mahieddine

Abstract:

Agriculture is essential to the continued existence of human life, as humans depend on it directly for the production of food. The exponential rise in population calls for a rapid increase in food production, with the application of technology to reduce laborious work and maximize production. Technology can aid and improve agriculture in several ways, through pre-planning and post-harvest, by the use of computer vision and image processing to determine the soil nutrient composition and the right amount, right time, and right place for the application of farm input resources such as fertilizers, herbicides, and water, as well as weed detection and early detection of pests and diseases. This is precision agriculture, which is thought to be the solution required to achieve our goals. There has been significant improvement in the areas of image processing and data processing, which had been a major challenge. A database of images is collected through remote sensing and analyzed, and a model is developed to determine the right treatment plans for different crop types and different regions. Features of images of vegetation need to be extracted, classified, segmented, and finally fed into the model. Different techniques have been applied to these processes, from neural networks, support vector machines, and fuzzy logic approaches to, most recently, the deep learning approach of convolutional neural networks, which generates excellent results for image classification. Here, a deep convolutional neural network is used to determine the soil nutrients required in a plantation for maximum production. The experimental results on the developed model yielded an average accuracy of 99.58%.
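The convolution, activation, and pooling building blocks that a CNN stacks can be shown in plain NumPy. This is a didactic sketch of the operations only: the toy 6x6 "image" and edge kernel are invented, and a real soil-nutrient classifier would learn its kernels from data rather than use a hand-written one.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN layers)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling -- the detail-discarding step."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "image" with a vertical boundary, and a vertical-edge kernel
img = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)
edge = np.array([[-1.0, 0.0, 1.0]] * 3)
feat = max_pool(relu(conv2d(img, edge)))
```

The feature map responds only at the boundary; stacking many such learned filters, with pooling in between, is what lets a deep CNN classify soil images.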

Keywords: convolution, feature extraction, image analysis, validation, precision agriculture

Procedia PDF Downloads 318
237 Comparison of Deep Learning and Machine Learning Algorithms to Diagnose and Predict Breast Cancer

Authors: F. Ghazalnaz Sharifonnasabi, Iman Makhdoom

Abstract:

Breast cancer is a serious health concern that affects many people around the world. According to a study published in the Breast journal, the global burden of breast cancer is expected to increase significantly over the next few decades. The number of deaths from breast cancer has been increasing over the years, but the age-standardized mortality rate has decreased in some countries. It is important to be aware of the risk factors for breast cancer and to get regular check-ups to catch it early if it does occur. Machine learning techniques have been used to aid in the early detection and diagnosis of breast cancer. These techniques, which have been shown to be effective in predicting and diagnosing the disease, have become a research hotspot. In this study, we consider two deep learning approaches, Multi-Layer Perceptron (MLP) and Convolutional Neural Network (CNN), and five machine learning algorithms, Decision Tree (C4.5), Naïve Bayesian (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and XGBoost (eXtreme Gradient Boosting), on the Breast Cancer Wisconsin Diagnostic dataset. We carried out the process of evaluating and comparing the classifiers, selecting appropriate metrics to evaluate classifier performance and an appropriate tool to quantify it. The main purpose of the study is predicting and diagnosing breast cancer with the mentioned algorithms and identifying the most effective of them with respect to the confusion matrix, accuracy, and precision. CNN outperformed all other classifiers and achieved the highest accuracy (0.982456). The work is implemented in the Anaconda environment using the Python programming language.
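The evaluation pipeline described — predict, then score against a confusion matrix for accuracy and precision — can be sketched with one of the listed classifiers (KNN) implemented from scratch. The Gaussian clusters below merely stand in for the Wisconsin diagnostic features; the split size and k are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy two-class data standing in for the diagnostic features
X0 = rng.normal(0.0, 1.0, (50, 2))   # "benign"
X1 = rng.normal(3.0, 1.0, (50, 2))   # "malignant"
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Shuffle and split into train/test
idx = rng.permutation(100)
X, y = X[idx], y[idx]
X_tr, X_te, y_tr, y_te = X[:70], X[70:], y[:70], y[70:]

def knn_predict(x, k=5):
    """Majority vote among the k nearest training points."""
    d = np.linalg.norm(X_tr - x, axis=1)
    votes = y_tr[np.argsort(d)[:k]]
    return int(votes.sum() > k // 2)

y_pred = np.array([knn_predict(x) for x in X_te])

# Confusion matrix, accuracy, precision -- the metrics the study compares
tp = int(np.sum((y_pred == 1) & (y_te == 1)))
tn = int(np.sum((y_pred == 0) & (y_te == 0)))
fp = int(np.sum((y_pred == 1) & (y_te == 0)))
fn = int(np.sum((y_pred == 0) & (y_te == 1)))
accuracy = (tp + tn) / len(y_te)
precision = tp / (tp + fp) if (tp + fp) else 0.0
```

In the study, the same confusion-matrix bookkeeping is applied to each of the seven classifiers so their accuracy and precision can be compared on equal footing.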

Keywords: breast cancer, multi-layer perceptron, Naïve Bayesian, SVM, decision tree, convolutional neural network, XGBoost, KNN

Procedia PDF Downloads 78
236 Epidemiological Survey on Tick-Borne Pathogens with Zoonotic Potential in Dog Populations of Southern Ethiopia

Authors: Hana Tadesse, Marika Grillini, Giulia Simonato, Alessandra Mondin, Giorgia Dotto, Antonio Frangipane Di Regalbono, Bersissa Kumsa, Rudi Cassini, Maria Luisa Menandro

Abstract:

Dogs are known to host several tick-borne pathogens with zoonotic potential; however, scant information is available on the epidemiology of these pathogens in low-income tropical countries, and in particular in sub-Saharan Africa. With the aim of investigating a wide range of tick-borne pathogens (i.e., Rickettsia spp., Anaplasma spp., Ehrlichia spp., Borrelia spp., Hepatozoon spp. and Babesia spp.), 273 blood samples were collected from dogs in selected districts of Ethiopia and analyzed by real-time and/or end-point PCR. The results of the study showed that Hepatozoon canis was the most prevalent pathogen (53.8%), followed by Anaplasma phagocytophilum (7.0%), Babesia canis rossi (3.3%), Ehrlichia canis (2.6%) and Anaplasma platys (2.2%). Furthermore, five samples tested positive for Borrelia spp., identified as Borrelia afzelii (n = 3) and Borrelia burgdorferi (n = 2), and two samples for Rickettsia spp., identified as Rickettsia conorii (n = 1) and Rickettsia monacensis (n = 1). The finding of Anaplasma phagocytophilum and of different species of the genera Borrelia and Rickettsia with zoonotic potential was unexpected and alarming, and calls for further investigation of the role of dogs and of the tick species acting as vectors in this specific context. Other pathogens (Hepatozoon canis, Babesia canis rossi, Anaplasma platys, Ehrlichia canis) are already known to have an important impact on dogs' health but have minor zoonotic potential, as they have rarely or never been reported in humans. Dogs from rural areas were found to be at higher risk for the different pathogens, probably due to the presence of other wild canids in the same environment. The findings of the present study contribute to a better knowledge of the epidemiology of tick-borne pathogens, which is relevant to both human and animal health.

Keywords: dogs, tick-borne pathogens, Africa, Ethiopia

Procedia PDF Downloads 96
235 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet

Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec

Abstract:

Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft and HVAC systems, environmental discharges, film cooling, and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall that is parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall, and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point; this is referred to as the Coanda effect. Further downstream of the reattachment point, the offset jet has the characteristics of a wall jet flow. The offset jet therefore combines characteristics of free, impingement, and wall jets, and it is relatively more complex than any of these flows alone. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet with different offset height ratios (the ratio of the distance from the jet exit to the bottom wall, to the jet nozzle diameter). To this end, a numerical study was conducted to investigate a three-dimensional offset jet flow through the resolution of the governing Navier–Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model. A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure fields, and Reynolds stresses.

Keywords: offset jet, offset ratio, numerical simulation, RSM

Procedia PDF Downloads 304
234 The Associations of Pes Planus Plantaris (Flat Foot) to the Postural Stability of Basketball Student-Athletes Through the Ground Reaction Force Vector (vGRF)

Authors: Def Primal, Sasanty Kusumaningtyas, Ermita I. Ibrahim

Abstract:

Purpose: The main objective of this study is to determine whether pes planus plantaris (flat foot) can contribute to disturbance of postural stability in basketball athletes during static and dynamic activities. Methods: This cross-sectional, quantitative, analytical retrospective study of 47 basketball student-athletes identified the foot arch index from the extensive footprint area, and an AMTI (Advanced Mechanical Technology Inc.) force platform (force plate) determined their postural stability. Subjects performed three activities (static, dynamic vertical jump, and dynamic loading response) while the resultant ground reaction force (GRF) vectors were measured with respect to the vertical plane of body mass (W). Results: 80.9% of subjects had pes planus plantaris. There were no significant differences in pes planus plantaris incidence between the sexes (p > 0.005); however, there were differences with respect to the athletes' exercise period. Over 50% of the athlete students who had practiced strictly for more than four years exhibited pes planus plantaris; a long period of exercise is accordingly believed to promote pes planus. The average GRF vectors of pes planus plantaris subjects in the three different basketball movements show a significant correlation with postural stability. Conclusions: Pes planus plantaris affected most of the basketball athletes, in relation to the length and intensity of the exercise performed. The condition contributes significantly to postural stability disturbance in the static condition, the dynamic vertical jump, and the dynamic vertical jump loading response.

Keywords: pes planus plantaris, flatfoot, ground reaction force, static and dynamic stability

Procedia PDF Downloads 144
233 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world use this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians in understanding pathophysiological mechanisms, in diagnosis and prognosis, and in choosing treatment plans. DNA microarray technology has made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the K-means algorithm, and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations, and a performance evaluation of these algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior, and processing time, a hybrid clustering-based optimization approach is proposed.
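As a concrete example of the clustering step, here is plain K-means (Lloyd's algorithm, with a farthest-point initialization for determinism) on a toy expression matrix. The matrix is synthetic: rows play the role of genes, columns of samples, and the two co-expressed groups are invented for the illustration.

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm: a first step toward finding co-expressed gene
    groups in an expression matrix (rows = genes, cols = samples)."""
    rng = np.random.default_rng(seed)
    # Farthest-point initialization: spreads the starting centers apart
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # Assign each gene profile to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy expression matrix: two co-expressed groups across 4 samples
rng = np.random.default_rng(1)
up = rng.normal([5, 5, 0, 0], 0.3, (20, 4))     # high in samples 1-2
down = rng.normal([0, 0, 5, 5], 0.3, (20, 4))   # high in samples 3-4
X = np.vstack([up, down])
labels, centers = kmeans(X, k=2)
```

On real microarray data the same loop runs over thousands of genes; the hybrid approach the abstract proposes would replace this fixed-iteration update with an optimized search.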

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 324
232 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison

Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes

Abstract:

Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. The study and characterization of colloids are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to obtain enough material for analysis while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but there is as yet neither a reference method nor an estimate of the impact of these different techniques on the colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly size distribution and elementary composition. Ultrafiltration methods (frontal, tangential, and centrifugal) were considered, since they are widely used for the extraction of colloids in natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICPMS) and Total Organic Carbon (TOC) analysis) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the colloidal phase composition more effectively, while others are easier or faster to use. The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.

Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution

Procedia PDF Downloads 217
231 Epidemiological Study on Prevalence of Bovine Trypanosomosis and Tsetse Fly Density in Some Selected of Pastoral Areas of South Omo Zone

Authors: Tekle Olbamo, Tegegn Tesfaye, Dikaso Unbushe, Belete Jorga

Abstract:

Bovine trypanosomosis is a haemoprotozoan parasitic disease, mostly transmitted by the tsetse fly (Glossina species), that causes significant losses to the livestock industry in pastoral and agro-pastoral areas. The current study therefore aimed to determine the prevalence of bovine trypanosomosis and its vector density in selected tsetse suppression and non-suppression areas of the South Omo Zone from December 2018 to November 2019. The dark phase contrast buffy coat and hematocrit techniques and the thin blood smear method were used to determine the prevalence of trypanosomosis infection and the packed cell volume (PCV), respectively. For the entomological investigation, 96 NGU traps were deployed (64 traps in tsetse suppression areas, 32 traps in non-suppression areas) in vector breeding areas. The overall prevalence of bovine trypanosomosis was 11.05% (142/1284), and the seasonal prevalence was 14.33% (92/642) and 7.78% (50/642) for the dry and wet seasons, respectively; the difference in prevalence between the two seasons was statistically significant (P < 0.05). Trypanosoma congolense was the dominant parasite species (80% and 71.64%), followed by Trypanosoma vivax. Parasitaemic animals had a significantly lower overall mean PCV (23.57 ± 3.13) than aparasitaemic animals (27.80 ± 4.95), and animals examined during the dry season (26.22 ± 4.37) had a significantly lower mean PCV than animals examined during the wet season. The entomological results revealed apparent densities of 2.64 and 2.03 flies/trap/day (F/T/D) in the tsetse suppression and non-suppression areas, respectively, during the dry season, and 0.42 and 0.56 F/T/D during the wet season. Glossina pallidipes was the only cyclical vector collected and identified in the study areas, along with numerous mechanical vectors of the genera Tabanus, Stomoxys, and Haematopota. Integrated and safe control and prevention efforts should therefore be undertaken to uphold cattle production and productivity in the area.

Keywords: bovine trypanosomiasis, South Omo, tsetse fly density, epidemiological study

Procedia PDF Downloads 163
230 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract from a facial image a feature vector whose dimension depends on their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling, an operation that discards the fine details forensic experts rely on most. In our experiment, we adopted a variety of deep-learning-based face recognition algorithms and compared a large number of naturally collected face images against known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results supported that deep-learning-based facial recognition algorithms detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were examined to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At a specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
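The score-level fusion step can be sketched as a weighted combination of log-likelihood ratios (LLRs) from the automated system and the human examiner. This is a minimal illustration, not the paper's method: the weight, threshold, and score values below are hypothetical.

```python
import numpy as np

def fuse_scores(llr_system, llr_expert, w=0.5):
    """Score-level fusion: weighted sum of log-likelihood ratios
    from an automated system and a human examiner."""
    return w * llr_system + (1 - w) * llr_expert

def decide(llr_fused, threshold=0.0):
    """Accept the same-source hypothesis when the fused LLR
    exceeds the decision threshold."""
    return llr_fused > threshold

# Hypothetical scores for one comparison: the system mildly
# supportive, the examiner strongly supportive of same-source.
fused = fuse_scores(llr_system=0.8, llr_expert=2.4, w=0.6)
print(round(fused, 2))  # 0.6*0.8 + 0.4*2.4 = 1.44
print(decide(fused))
```

In practice the weight `w` would be tuned on a calibration set so that the fused LLRs remain well-calibrated.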

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 130
229 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs

Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa

Abstract:

Hatchery performance is critical to the profitability of poultry breeder operations. Several extrinsic parameters of eggs and breeders can increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient model for classifying hatchability rates greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeders' age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbours (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability, using binary classification techniques. Hatchability was negatively correlated with egg weight, breeders' age, shell width, and shell length, and positively correlated with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single linear models, with the highest coefficient of determination (R² = 94%) and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values of 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
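The classifier comparison can be sketched with scikit-learn. The snippet below trains three of the named algorithms on synthetic data standing in for the seven egg/breeder parameters; the data and accuracies are illustrative, not the study's.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the seven extrinsic egg/breeder features;
# the binary label marks hatchability above or below the 90% cut-off.
X, y = make_classification(n_samples=300, n_features=7,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "CART": DecisionTreeClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
}
for name, model in models.items():
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: {acc:.3f}")
```

On the real data, each model would instead be evaluated with cross-validation before the final comparison.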

Keywords: classification models, egg weight, fertilised eggs, multiple linear regression

Procedia PDF Downloads 88
228 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware trojan detection method for large-scale circuits. Because HTs introduce additional redundant circuits with physical characteristic changes in structure, area, and power consumption, we propose a machine-learning-based detection method built on the physical characteristics of gate-level netlists. This method transforms hardware trojan detection into a binary classification problem over physical characteristics, greatly improving detection speed. To address the imbalance between the numbers of pure circuit samples and HT circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We trained and validated three machine learning algorithms, K-Nearest Neighbours, Random Forest, and Support Vector Machine, on benchmark circuits from Trust-Hub, and all achieved good results. In case studies based on AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness against variant HTs, we designed variants using open-source HTs. The proposed method guarantees robust detection accuracy at millisecond-level detection times for IC and FPGA design flows and has good detection performance for library-variant HTs.
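The oversampling idea behind SMOTETomek can be illustrated with a small numpy sketch that interpolates between minority-class neighbours. This is only the SMOTE half of the algorithm (SMOTETomek, as implemented in imbalanced-learn, additionally removes Tomek links after oversampling), and all data here are hypothetical.

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, rng=None):
    """Generate synthetic minority samples by interpolating between
    each sample and one of its k nearest minority neighbours
    (the core idea of SMOTE; SMOTETomek additionally removes
    Tomek links after oversampling)."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to every minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the sample itself
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# Four hypothetical minority-class feature vectors (e.g. netlist features).
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote_like_oversample(X_minority, n_new=4, rng=0)
print(synthetic.shape)  # (4, 2)
```

Because each synthetic point is a convex combination of two minority samples, it always lies on a segment between real samples.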

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 148
227 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in land change and remote sensing research because it involves a large volume of data. Moreover, selecting the right classification method is difficult, especially when the study area contains different types of landscapes. This paper compares the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia-West Asia Corridor, one of the main components of the Belt and Road Initiative (BRI). The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The algorithms were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings illustrated that among the three algorithms, RF, with 91% overall accuracy, produced the best land cover map of the corridor, whereas ANN performed worst, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling a huge volume of remotely sensed data shows that it could also help researchers generate reliable long-term land cover change maps. These findings are of great importance for decision-makers and BRI authorities in strategic land use planning.
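Overall accuracy, the metric quoted above, is the trace of the confusion matrix divided by its total. The sketch below uses a hypothetical 4-class matrix (its entries are illustrative, chosen so the overall accuracy lands at 91%, the RF figure reported above).

```python
import numpy as np

# Hypothetical confusion matrix for a 4-class land cover map
# (rows: reference classes, columns: predicted classes).
cm = np.array([[50,  2,  1,  0],
               [ 3, 45,  2,  1],
               [ 1,  2, 40,  3],
               [ 0,  1,  2, 47]])

overall_accuracy = np.trace(cm) / cm.sum()   # fraction mapped correctly
producers = np.diag(cm) / cm.sum(axis=1)     # per-class recall
users = np.diag(cm) / cm.sum(axis=0)         # per-class precision
print(f"OA = {overall_accuracy:.1%}")        # OA = 91.0%
```

Producer's and user's accuracies complement the single overall figure by exposing which classes are confused.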

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 113
226 A Computational Approach for the Prediction of Relevant Olfactory Receptors in Insects

Authors: Zaide Montes Ortiz, Jorge Alberto Molina, Alejandro Reyes

Abstract:

Insects are extremely successful organisms, and a sophisticated olfactory system is partly responsible for their survival and reproduction. The detection of volatile organic compounds can positively or negatively affect many insect behaviours. Compounds such as carbon dioxide (CO2), ammonium, indole, and lactic acid are essential for many mosquito species, such as Anopheles gambiae, to locate vertebrate hosts. For instance, in A. gambiae, the olfactory receptor AgOR2 is strongly activated by indole, which accounts for almost 30% of human sweat. In insects of agricultural importance, the detection and identification of pheromone receptors (PRs) in lepidopteran species has become a promising field for integrated pest management. For example, disruption of the pheromone receptor BmOR1, mediated by transcription activator-like effector nucleases (TALENs), completely removed sensitivity to bombykol, affecting pheromone-source searching behaviour in male moths. The detection and identification of olfactory receptors in insect genomes is therefore fundamental to improving our understanding of ecological interactions and to providing alternatives for integrated pest and vector management. Hence, the objective of this study is to propose a bioinformatic workflow that enhances the detection and identification of potential olfactory receptors in the genomes of relevant insects. Applying hidden Markov models (HMMs) and different computational tools, we obtained candidate pheromone receptors in Tuta absoluta, as well as candidate carbon dioxide receptors in Rhodnius prolixus, the main vector of Chagas disease. This study demonstrated the validity of a bioinformatic workflow with the potential to improve the identification of olfactory receptors across insect orders.
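HMM-based receptor detection ultimately scores candidate sequences against a probabilistic model. The toy sketch below shows the forward-algorithm log-likelihood that underlies such scoring; a real workflow would search protein profiles with a tool such as HMMER, and every probability here is made up for illustration.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM
    (forward algorithm) -- the scoring principle behind profile-HMM
    searches of candidate receptor sequences."""
    alpha = pi * B[:, obs[0]]            # initialise with first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate and emit
    return np.log(alpha.sum())

# Toy 2-state HMM over a 2-symbol alphabet (hypothetical numbers).
pi = np.array([0.6, 0.4])                     # initial state probabilities
A = np.array([[0.7, 0.3], [0.4, 0.6]])        # state transitions
B = np.array([[0.9, 0.1], [0.2, 0.8]])        # symbol emissions
print(forward_log_likelihood([0, 1, 0], pi, A, B))
```

Candidates with a log-likelihood above a calibrated threshold under the receptor profile would be retained for downstream validation.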

Keywords: bioinformatic workflow, insects, olfactory receptors, protein prediction

Procedia PDF Downloads 150
225 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, whose effectiveness diminishes when applied to identifying fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news items to their specific domain in advance, a domain-specific detector may judge fake news more accurately. However, these approaches disregard the fact that a news record can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the training datasets must all be domain-labelled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to retain the multi-domain knowledge of the text, a low-dimensional domain embedding is generated for each news text. Subsequently, a feature extraction module utilising these unsupervisedly discovered domain embeddings extracts comprehensive features of the news. Finally, a classifier determines the authenticity of the news. To verify the proposed framework, tests were conducted on existing widely used datasets, and the experimental results demonstrate that the method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, the method can still effectively transfer domain knowledge, reducing the time consumed by tagging without sacrificing detection accuracy.
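One simple stand-in for the unsupervised domain-embedding step is TF-IDF followed by truncated SVD, with a classifier on top. This is an illustrative pipeline on a toy corpus, not the paper's architecture; the texts and labels are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

# Toy corpus standing in for news records (label 1 = fake).
texts = ["election vote fraud claims", "stock market crash rumor",
         "vaccine cures everything instantly", "central bank raises rates",
         "candidate wins local election", "new vaccine trial results published"]
labels = np.array([1, 1, 1, 0, 0, 0])

# Unsupervised low-dimensional "domain embedding": TF-IDF features
# reduced by truncated SVD, with no domain labels involved.
tfidf = TfidfVectorizer().fit_transform(texts)
domain_emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Final classifier on the learned embeddings.
clf = LogisticRegression().fit(domain_emb, labels)
print(clf.score(domain_emb, labels))
```

The key property mirrored here is that the embedding step needs no domain labels, so the same pipeline applies to unlabelled datasets.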

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 101
224 Solving LWE by Progressive Pumps and Its Optimization

Authors: Leizhang Wang, Baocang Wang

Abstract:

General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP challenge. We study the lattice basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of Pumps to solve SVP. First, we use low-dimensional Pump output bases to propose a predictor of the quality of high-dimensional Pump output bases. Both theoretical analysis and experimental tests illustrate that solving LWE problems with G6K's default SVP solving strategy (Workout) is more computationally expensive than with lattice reduction algorithms (e.g., BKZ 2.0, Progressive BKZ, Pump, and Jump BKZ) that use sieving as their SVP oracle. Second, the default Workout in G6K is optimised to achieve a stronger reduction at lower computational cost. Third, we combine the optimised Workout and the Pump output basis quality predictor to further reduce the computational cost by optimising the LWE instance selection strategy. In fact, we can solve the TU LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than the G6K default Workout. Fourth, we consider a combined two-stage (preprocessing by BKZ-β and a big Pump) LWE solving strategy. Both stages use the dimension-for-free technique to give new theoretical security estimations of several LWE-based cryptographic schemes. The estimations show that the security of these schemes under the conservative NewHope core-SVP model is somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimised to further improve LWE-solving efficiency by 15% and 57%. Finally, experiments examine the effects of our strategies on Normal Form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
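For readers unfamiliar with LWE, a toy instance generator clarifies what the challenge parameters (n, q, and a relative error rate) mean. The error model below is a simplified rounded Gaussian, not the exact challenge distribution, and the sketch is illustrative only.

```python
import numpy as np

def lwe_samples(n, q, m, alpha, rng=None):
    """Generate m toy LWE samples (A, b = A s + e mod q), with error
    drawn from a rounded Gaussian of standard deviation alpha * q.
    Real challenge instances use carefully specified distributions."""
    rng = np.random.default_rng(rng)
    s = rng.integers(0, q, size=n)               # secret vector
    A = rng.integers(0, q, size=(m, n))          # public matrix
    e = np.rint(rng.normal(0, alpha * q, size=m)).astype(int)
    b = (A @ s + e) % q                          # noisy inner products
    return A, b, s

# Parameters matching the TU LWE challenge instance cited above.
A, b, s = lwe_samples(n=65, q=4225, m=130, alpha=0.005, rng=0)
print(A.shape, b.shape)  # (130, 65) (130,)
```

Recovering `s` from `(A, b)` is the LWE problem; the small error `e` is exactly what lattice reduction and sieving algorithms such as G6K must overcome.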

Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free

Procedia PDF Downloads 60
223 Insights into Insect Vectors: Liberibacter Interactions

Authors: Murad Ghanim

Abstract:

The citrus greening disease, also known as Huanglongbing, caused by the phloem-limited bacterium Candidatus Liberibacter asiaticus (CLas), has resulted in tremendous losses and the death of millions of citrus trees worldwide. CLas is transmitted by the Asian citrus psyllid (ACP) Diaphorina citri. The closely related bacterium Candidatus Liberibacter solanacearum (CLso), which is associated with vegetative disorders in carrots and zebra chip disease in potatoes, is transmitted by other psyllid species, including Bactericera trigonica in carrots and B. cockerelli in potatoes. Chemical sprays that limit psyllid populations are currently the prevailing method for managing these diseases; however, they are limited in their effectiveness. A promising approach to preventing the transmission of these pathogens is to interfere with vector-pathogen interactions, but our understanding of these processes is very limited. CLas induces changes in the nuclear architecture of the ACP midgut and activates programmed cell death (apoptosis) in this organ. Strikingly, CLso displays the opposite effect in the gut of B. trigonica, showing limited apoptosis but widespread necrosis. Electron and fluorescence microscopy further showed that CLas induces the formation of endoplasmic reticulum (ER) inclusion- and replication-like bodies, in which it multiplies. ER involvement in bacterial replication is hypothesised to be the first stage of an immune response leading to the apoptotic and necrotic responses. ER exploitation and the subsequent events that lead to these cellular and stress responses might activate a cascade of molecular responses ending in apoptosis and necrosis. Understanding the molecular interactions that underlie the necrotic and apoptotic responses to the bacteria will increase our knowledge of ACP-CLas and B. trigonica-CLso interactions and will set the foundation for developing novel, efficient strategies to disturb these interactions and inhibit transmission.

Keywords: Liberibacter, psyllid, transmission, apoptosis, necrosis

Procedia PDF Downloads 146
222 Phenotypic and Genotypic Expression of Hyalomma Anatolicum Ticks Silenced for Ferritin Genes through RNA Interference Technology

Authors: Muhammad Sohail Sajid, Mahvish Maqbool, Hafiz Muhammad Rizwan, Muhammad Saqib, Haroon Ahmad

Abstract:

Ticks are blood-sucking ectoparasites of mammals, reptiles, and birds that cause decreased production and economic losses. Hyalomma anatolicum is the main vector of CCHF transmission, and Pakistan has faced several CCHF outbreaks in the recent past. Ferritin (fer) is a highly conserved molecule, ubiquitous in most tick tissues and responsible for iron metabolism and storage. It was hypothesised that the development of acaricide resistance and the residual effects of commercially used acaricides could be avoided by alternative control methods, including RNA interference. The current study aimed to evaluate the effects of fer silencing on tick feeding, average body weight, egg mass index, and mortality. Ticks collected through standard collection protocols were subjected to RNA isolation using the Trizol method, and commercially available kit procedures were followed for cDNA and dsRNA synthesis. The soaking/immersion method was used for dsRNA delivery; 15°C was found to be the optimum temperature for inducing gene silencing in ticks, as maximum survivability after immersion was attained at this temperature. Our findings showed a 27% reduction in the body weight of the fer-silenced group and a significant association between fer and body weight. Silencing of fer had a significant effect on engorgement percentage (P = 0.0007), oviposition (P = 0.008), egg mass (P = 0.004), and hatching (P = 0.001). This study, along with previous studies, indicates that iron toxicity induced by silencing fer could play an important role in tick control and that fer is a potent candidate for vaccine development.

Keywords: ticks, iron, ferritin, engorgement, oviposition, immersion, RNA interference

Procedia PDF Downloads 97
221 An Assessment of Floodplain Vegetation Response to Groundwater Changes Using the Soil & Water Assessment Tool Hydrological Model, Geographic Information System, and Machine Learning in the Southeast Australian River Basin

Authors: Newton Muhury, Armando A. Apan, Tek N. Marasani, Gebiaw T. Ayele

Abstract:

The changing climate has degraded freshwater availability in Australia, influencing vegetation growth to a great extent. This study assessed vegetation responses to groundwater using Terra's Moderate Resolution Imaging Spectroradiometer (MODIS) Normalised Difference Vegetation Index (NDVI) and soil water content (SWC). The SWAT hydrological model was set up in a southeast Australian river catchment for groundwater analysis and was calibrated and validated against monthly streamflow from 2001 to 2006 and from 2007 to 2010, respectively. The SWAT-simulated soil water content for 43 sub-basins and monthly MODIS NDVI data for three vegetation types (forest, shrub, and grass) were analysed in the machine learning tool Waikato Environment for Knowledge Analysis (WEKA), using two supervised machine learning algorithms: support vector machine (SVM) and random forest (RF). The assessment shows that vegetation responses and soil water content vary between the dry and wet seasons. The WEKA models produced high positive relationships (r = 0.76, 0.73, and 0.81) between the NDVI values of all vegetation in the sub-basins and soil water content (SWC), groundwater flow (GW), and the combination of the two variables, respectively, during the dry season. These responses were reduced by 36.8% (r = 0.48) and 13.6% (r = 0.63) against GW and SWC, respectively, in the wet season. Although the rainfall pattern is highly variable in the study area, summer rainfall is very effective for the growth of grass vegetation. This study has enriched our knowledge of vegetation responses to groundwater in each season, which will facilitate better floodplain vegetation management.
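The regression step can be sketched in Python rather than WEKA: fit a random forest to (SWC, GW) predictors and report the correlation r between predicted and observed NDVI. All numbers below are synthetic, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical monthly predictors: soil water content and groundwater flow.
swc = rng.uniform(0.1, 0.4, 120)
gw = rng.uniform(0.0, 1.0, 120)
# Synthetic NDVI with an assumed dependence on both predictors plus noise.
ndvi = 0.3 + 0.8 * swc + 0.2 * gw + rng.normal(0, 0.02, 120)

X = np.column_stack([swc, gw])
pred = RandomForestRegressor(random_state=0).fit(X, ndvi).predict(X)
r = np.corrcoef(pred, ndvi)[0, 1]   # correlation coefficient, as in WEKA
print(f"r = {r:.2f}")
```

Comparing r for single-predictor models against the combined model reproduces the kind of dry-season/wet-season contrast reported above.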

Keywords: ArcSWAT, machine learning, floodplain vegetation, MODIS NDVI, groundwater

Procedia PDF Downloads 101
220 Classifying Affective States in Virtual Reality Environments Using Physiological Signals

Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley

Abstract:

Emotions are functional behaviours influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions is challenging because individuals perceive their environments differently. It is therefore crucial to have publicly available databases and virtual reality (VR) environments that have been scientifically validated for emotional classification. This study utilised two commercially available VR applications (Guided Meditation VR™ and Richie's Plank Experience™) to induce acute stress and a calm state among participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective states. Subjective measures included the Self-Assessment Manikin, emotion cards, and a 9-point Visual Analogue Scale for perceived stress, collected using a Virtual Reality Assessment Tool developed by our team. Objective measures comprised electrocardiogram and respiration data collected from 25 participants (15 M, 10 F, mean age = 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. Subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states. For classifying the affective states, logistic regression (LR) and a support vector machine (SVM) with a linear kernel were developed. The LR outperformed the SVM, achieving 93.8% accuracy, 96.2% precision, and 93.8% recall under leave-one-subject-out cross-validation. The VR assessment tool and the data collected in this study are publicly available to other researchers.
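Common time-domain heart rate variability features such as SDNN and RMSSD can be computed directly from RR intervals. The sketch below uses hypothetical interval values; the study's actual feature set is not specified beyond "heart rate variability components".

```python
import numpy as np

def hrv_features(rr_ms):
    """Two standard time-domain HRV features from RR intervals (ms):
    SDNN (overall variability) and RMSSD (short-term variability)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # sample std of intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # RMS of successive diffs
    return sdnn, rmssd

# Hypothetical RR series (ms); acute stress typically lowers both features.
sdnn, rmssd = hrv_features([800, 810, 790, 805, 795, 820])
print(round(sdnn, 1), round(rmssd, 1))
```

Feature vectors like (SDNN, RMSSD, respiration rate) per window are what the LR and SVM classifiers would consume.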

Keywords: affective computing, biosignals, machine learning, stress database

Procedia PDF Downloads 144
219 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis

Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci

Abstract:

Concerns about energy security, energy prices, and climate change have led scientific research towards sustainable alternatives to fossil fuels: renewable energy sources coupled with hydrogen as an energy vector and with carbon capture and conversion technologies. Among the technologies investigated in recent decades, biomass gasification has acquired great interest owing to the possibility of low-cost, CO₂-negative hydrogen production from a large variety of widely available organic wastes. Upstream and downstream treatments have been studied to maximise hydrogen yield, reduce organic and inorganic contaminants below the admissible levels of the coupled technologies, and capture and convert carbon dioxide. However, studies analysing a whole process comprising all these technologies are still missing. To fill this gap, the present paper investigates the combination of hydrothermal carbonisation (HTC), sorption-enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion by a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste, modelled in Aspen Plus. The proposed model identifies and optimises plant performance by varying operating parameters (such as temperature, CaO/biomass ratio, and separation efficiency). The carbon footprint of the overall plant is 2.3 kg CO₂/kg H₂, lower than the latest limit of 3 kg CO₂/kg H₂ set by the European Commission for hydrogen to be considered "clean". The hydrogen yield of the whole plant is 250 g H₂/kg biomass.
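The reported footprint is simply plant CO₂ emissions per kg of biomass divided by the hydrogen yield per kg of biomass. Only the 2.3 kg CO₂/kg H₂ result and the 250 g/kg yield come from the study; the emissions figure below is back-calculated for illustration.

```python
# Carbon footprint per kg of H2: CO2 emitted divided by H2 produced,
# both referred to 1 kg of biomass feed.
h2_yield = 0.250      # kg H2 per kg biomass (from the study)
co2_emitted = 0.575   # kg CO2 per kg biomass (hypothetical, back-calculated)

footprint = co2_emitted / h2_yield   # kg CO2 per kg H2
print(footprint)      # 2.3, below the 3 kg CO2/kg H2 EU threshold
```

Any pair of emissions and yield figures in these units combines the same way, which makes the threshold comparison straightforward to audit.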

Keywords: biomass gasification, hydrogen, aspen plus, sorption enhance gasification

Procedia PDF Downloads 81
218 dynr.mi: An R Program for Multiple Imputation in Dynamic Modeling

Authors: Yanling Li, Linying Ji, Zita Oravecz, Timothy R. Brick, Michael D. Hunter, Sy-Miin Chow

Abstract:

Assessing several individuals intensively over time yields intensive longitudinal data (ILD). Even though ILD provide rich information, they also bring other data analytic challenges. One of these is the increased occurrence of missingness with increased study length, possibly under non-ignorable missingness scenarios. Multiple imputation (MI) handles missing data by creating several imputed data sets, and pooling the estimation results across imputed data sets to yield final estimates for inferential purposes. In this article, we introduce dynr.mi(), a function in the R package, Dynamic Modeling in R (dynr). The package dynr provides a suite of fast and accessible functions for estimating and visualizing the results from fitting linear and nonlinear dynamic systems models in discrete as well as continuous time. By integrating the estimation functions in dynr and the MI procedures available from the R package, Multivariate Imputation by Chained Equations (MICE), the dynr.mi() routine is designed to handle possibly non-ignorable missingness in the dependent variables and/or covariates in a user-specified dynamic systems model via MI, with convergence diagnostic check. We utilized dynr.mi() to examine, in the context of a vector autoregressive model, the relationships among individuals’ ambulatory physiological measures, and self-report affect valence and arousal. The results from MI were compared to those from listwise deletion of entries with missingness in the covariates. When we determined the number of iterations based on the convergence diagnostics available from dynr.mi(), differences in the statistical significance of the covariate parameters were observed between the listwise deletion and MI approaches. These results underscore the importance of considering diagnostic information in the implementation of MI procedures.
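The pooling step of multiple imputation follows Rubin's rules: average the per-imputation estimates, then combine within- and between-imputation variance. A minimal sketch (in Python rather than R, with hypothetical estimates; dynr.mi performs this pooling internally):

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool point estimates and squared standard errors from m imputed
    data sets using Rubin's rules (the combining step behind MI)."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    q_bar = estimates.mean()            # pooled point estimate
    u_bar = variances.mean()            # within-imputation variance
    b = estimates.var(ddof=1)           # between-imputation variance
    t = u_bar + (1 + 1 / m) * b         # total variance of q_bar
    return q_bar, t

# Hypothetical parameter estimates and variances from 5 imputed data sets.
q, t = pool_rubin([0.52, 0.49, 0.55, 0.50, 0.54],
                  [0.010, 0.011, 0.009, 0.010, 0.012])
print(round(q, 3), round(t, 4))
```

The total variance `t` is what makes MI-based standard errors honest about imputation uncertainty, which is why significance conclusions can differ from listwise deletion.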

Keywords: dynamic modeling, missing data, mobility, multiple imputation

Procedia PDF Downloads 165
217 Applying Multiplicative Weight Update to Skin Cancer Classifiers

Authors: Animish Jain

Abstract:

This study uses Multiplicative Weight Update within artificial intelligence and machine learning to create models that diagnose skin cancer from microscopic images of cancer samples. The multiplicative weight update method combines the predictions of multiple models to try to obtain more accurate results. Logistic Regression, Convolutional Neural Network (CNN), and Support Vector Machine Classifier (SVMC) models are employed within the Multiplicative Weight Update system. These models are trained on pictures of skin cancer from the ISIC Archive to learn patterns for labelling unseen scans as either benign or malignant. The multiplicative weight update algorithm accounts for the precision and accuracy of each model on each successive guess and weights that model's predictions accordingly; the weighted guesses are then combined to produce the final prediction. The research hypothesis stated that there would be a significant difference in accuracy between the three models and the Multiplicative Weight Update system. The SVMC model had an accuracy of 77.88%, the CNN model 85.30%, and the Logistic Regression model 79.09%; the Multiplicative Weight Update algorithm achieved an accuracy of 72.27%. The conclusion drawn was that there was a significant difference in accuracy between the three models and the Multiplicative Weight Update system, and that a CNN model alone would be a better option for this problem. This may be because Multiplicative Weight Update is not effective in a binary setting with only two possible classifications. In a categorical setting with multiple classes, a Multiplicative Weight Update system might become more proficient, as it draws on the strengths of multiple models to classify images into many categories rather than only two. This experimentation can help create better algorithms and models for the future of artificial intelligence in medical imaging.
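The weighting scheme described above can be sketched as the classic multiplicative-weights algorithm over a panel of expert classifiers; the predictions, labels, and learning rate below are hypothetical.

```python
import numpy as np

def mwu_predict(expert_preds, y_true, eta=0.5):
    """Multiplicative Weight Update over a panel of classifiers:
    each round the ensemble predicts by weighted majority vote,
    then experts that erred are down-weighted by (1 - eta)."""
    w = np.ones(expert_preds.shape[0])   # one weight per expert
    ensemble = []
    for t in range(expert_preds.shape[1]):
        votes = expert_preds[:, t]
        # weighted majority vote between labels 0 and 1
        pred = int(w[votes == 1].sum() > w[votes == 0].sum())
        ensemble.append(pred)
        w[votes != y_true[t]] *= (1 - eta)   # penalise wrong experts
    return np.array(ensemble), w

# Three hypothetical models (rows) predicting over six samples.
preds = np.array([[1, 0, 1, 1, 0, 1],
                  [1, 1, 0, 1, 0, 0],
                  [0, 0, 1, 0, 1, 1]])
truth = np.array([1, 0, 1, 1, 0, 1])
ens, weights = mwu_predict(preds, truth)
print(ens, weights)
```

In this toy run the first expert is always correct, so it keeps weight 1.0 while the other two decay, and the weighted vote tracks the truth; with three closely matched real models, a weak expert can drag the vote down, consistent with the result reported above.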

Keywords: artificial intelligence, machine learning, multiplicative weight update, skin cancer

Procedia PDF Downloads 80
216 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR Data

Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell

Abstract:

Hedgerows play an important role in a wide range of ecological habitats, landscape and agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection from satellite imagery is a challenging remote sensing problem: spatially, a hedge resembles a linear object such as a road, and spectrally, it is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges by acquiring images with sufficient spectral and spatial resolution. VHR data make it possible to detect hedgerows as line features, but characterising them at the landscape scale remains difficult. This research uses TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015 over a test site in Fermoy, Ireland, to detect hedgerows. Both polarisations (HH/VV) of the dual-polarised Spotlight data are used. Various SAR image analysis techniques, integrating classification algorithms such as texture analysis, support vector machine, k-means, and random forest, are tested iteratively to detect hedgerows and characterise them. We apply Shannon entropy (ShE) and backscattering analysis of single- and double-bounce scattering in a polarimetric analysis, perform object-oriented classification, and finally extract the hedgerow network. This work is in progress, and further methods remain to be tested to identify the best approach for the study area. The preliminary results presented here indicate that polarimetric TerraSAR-X imagery can potentially detect hedgerows.
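Shannon entropy of eigenvalue-derived pseudo-probabilities is a standard discriminator between mixed and single scattering mechanisms in polarimetric analysis. A minimal sketch with hypothetical eigenvalue shares (not values from this study):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (bits) of a normalised probability vector, e.g.
    eigenvalue-derived pseudo-probabilities of a dual-pol coherency
    matrix."""
    p = np.asarray(p, float)
    p = p[p > 0]                     # 0 * log(0) is taken as 0
    return -(p * np.log2(p)).sum()

# Hypothetical eigenvalue shares: a hedge pixel with mixed scattering
# (high entropy) versus a bare-field pixel dominated by one mechanism.
print(round(shannon_entropy([0.55, 0.45]), 3))  # close to 1 bit
print(round(shannon_entropy([0.95, 0.05]), 3))  # close to 0 bits
```

Thresholding an entropy image computed this way is one simple route to separating hedge-like mixed-scattering pixels from homogeneous fields before object-oriented classification.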

Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis

Procedia PDF Downloads 231