Search results for: graph-based feature filtering method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19917

19257 The Finite Element Method for Nonlinear Fredholm Integral Equation of the Second Kind

Authors: Melusi Khumalo, Anastacia Dlamini

Abstract:

In this paper, we consider the numerical solution of nonlinear Fredholm integral equations of the second kind. We work with a uniform mesh and use Lagrange polynomials together with the Galerkin finite element method, where the weight function is chosen so that it takes the form of the approximate solution but with arbitrary coefficients. We apply the finite element method to the nonlinear Fredholm integral equation of the second kind and consider the error analysis of the method. Furthermore, we look at a specific example to illustrate the implementation of the finite element method.
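
As a point of reference, the nonlinear Fredholm integral equation of the second kind and the Galerkin conditions described above can be written in the generic form below; this is a hedged sketch of the standard formulation rather than the paper's exact notation, with ψ denoting the nonlinearity and φ_i the Lagrange basis functions on the uniform mesh.

```latex
\begin{gather*}
u(x) = f(x) + \lambda \int_a^b K(x,t)\,\psi\bigl(u(t)\bigr)\,dt, \qquad x \in [a,b],\\[4pt]
u_N(x) = \sum_{j=1}^{N} c_j\,\phi_j(x), \qquad
\int_a^b \phi_i(x)\left[u_N(x) - f(x) - \lambda \int_a^b K(x,t)\,\psi\bigl(u_N(t)\bigr)\,dt\right]dx = 0,
\quad i = 1,\dots,N.
\end{gather*}
```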

Keywords: finite element method, Galerkin approach, Fredholm integral equations, nonlinear integral equations

Procedia PDF Downloads 367
19256 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance

Authors: Yash Bingi, Yiqiao Yin

Abstract:

Reduction of child mortality is an ongoing struggle and a commonly used indicator of progress in the medical field. The number of under-5 deaths is around 5 million worldwide, with many of the deaths being preventable. In light of this issue, Cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus and determine the risk of child mortality. However, interpreting the results of CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation (RISE) of black-box models, called Feature Alteration for explanation of Black Box Models (FAB), was created, and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and determine which features were most influential in the process.
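
A minimal sketch of the classification stage described above (oversampling followed by an SVM) is given below; the CSV file name and the fetal_health label column are assumptions, and the explanation algorithms (FAB, SHAP, LIME) are not reproduced.

```python
# Minimal sketch: oversample the minority classes, then fit an RBF-kernel SVM.
# The file name and the "fetal_health" column are hypothetical stand-ins.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("fetal_health.csv")                      # hypothetical CTG feature table
X, y = df.drop(columns=["fetal_health"]), df["fetal_health"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# oversampling is applied to the training split only
X_train, y_train = SMOTE(random_state=0).fit_resample(X_train, y_train)

scaler = StandardScaler().fit(X_train)
clf = SVC(kernel="rbf", C=10.0).fit(scaler.transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(scaler.transform(X_test))))
```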

Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations

Procedia PDF Downloads 135
19255 Verbal Prefix Selection in Old Japanese: A Corpus-Based Study

Authors: Zixi You

Abstract:

There are a number of verbal prefixes in Old Japanese. However, the selection, or the compatibility, of verbs and verbal prefixes is among the least investigated topics in the Old Japanese language. Unlike other types of prefixes, verbal prefixes in dictionaries are more often than not listed with very brief information such as ‘unknown meaning’ or ‘rhythmic function only’. To fill in a part of this knowledge gap, this paper presents an exhaustive investigation based on the newly developed ‘Oxford Corpus of Old Japanese’ (OCOJ), which includes nearly all existing resources of the Old Japanese language, with detailed linguistic information in TEI-XML tags. In this paper, we propose the possibility that the following three prefixes, i-, sa-, ta- (with ta- considered a variation of sa-), are relevant to split intransitivity in Old Japanese, with evidence that unaccusative verbs favor i- and that unergative verbs favor sa- (ta-). This might be undermined by the fact that transitives are also found to follow i-. However, with several manifestations of split intransitivity in Old Japanese discussed, the behavior of transitives in verbal prefix selection is no longer as surprising as it may seem when one looks at the selection of verbal prefixes in isolation. It is possible that there are one or more features that play essential roles in determining the selection of i-, and the attested transitive verbs happen to have these features. The data suggest that this feature is a sense of ‘change’ of location or state involved in the event denoted by the verb, which is a feature of typical unaccusatives. This is further discussed in terms of the ‘affectedness’ hierarchy. The presentation of this paper, which includes a brief demonstration of the OCOJ, is expected to be of interest to both specialists and general audiences.

Keywords: old Japanese, split intransitivity, unaccusatives, unergatives, verbal prefix selection

Procedia PDF Downloads 403
19254 An Online 3D Modeling Method Based on a Lossless Compression Algorithm

Authors: Jiankang Wang, Hongyang Yu

Abstract:

This paper proposes a portable online 3D modeling method. The method first uses a depth camera to collect data and compresses the depth data with a frame-by-frame lossless data compression method, while the color images are encoded in the H.264 format. After the cloud receives the color and depth images, a 3D modeling method based on BundleFusion is used to complete the 3D modeling. The results of this study indicate that the method is portable, works online, and is highly efficient, and that it has a wide range of application prospects.
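
A minimal sketch of the frame-by-frame lossless depth compression step is given below, assuming 16-bit depth frames delivered as NumPy arrays; zlib stands in for whatever lossless codec the authors actually use, and the H.264 color stream and BundleFusion reconstruction are not shown.

```python
# Minimal sketch: losslessly compress and recover one 16-bit depth frame.
import zlib
import numpy as np

def compress_depth_frame(depth: np.ndarray) -> bytes:
    """Losslessly compress one 16-bit depth frame."""
    assert depth.dtype == np.uint16
    return zlib.compress(depth.tobytes(), level=6)

def decompress_depth_frame(blob: bytes, shape) -> np.ndarray:
    """Recover the exact original depth frame."""
    return np.frombuffer(zlib.decompress(blob), dtype=np.uint16).reshape(shape)

# random stand-in frame; real depth maps compress far better than random data
frame = (np.random.rand(480, 640) * 65535).astype(np.uint16)
blob = compress_depth_frame(frame)
assert np.array_equal(frame, decompress_depth_frame(blob, frame.shape))
print(f"raw {frame.nbytes} B -> compressed {len(blob)} B")
```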

Keywords: 3D reconstruction, bundlefusion, lossless compression, depth image

Procedia PDF Downloads 75
19253 Difference Expansion Based Reversible Data Hiding Scheme Using Edge Directions

Authors: Toshanlal Meenpal, Ankita Meenpal

Abstract:

Difference expansion is a very important technique in the reversible data hiding field. Thanks to the reversibility feature, both the secret message and the cover image can be completely recovered, without any distortion, after the data extraction process. In general, in any difference expansion scheme, embedding is performed by an integer transform on the difference image obtained by grouping two neighboring pixel values. This paper proposes an improved reversible difference expansion embedding scheme that considers the edge direction when modifying the difference of two neighboring pixel values. In general, a larger difference tends to produce a more degraded stego image quality than a smaller difference. The experimental results show that the proposed scheme achieves an image quality gain in the range of 0.5 to 3.7 dB on average, while its payload capacity is almost the same as that of the previous method.
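
For illustration, classic difference expansion on a single pixel pair works as in the sketch below (a Tian-style integer transform); the edge-direction-based pair selection proposed in the paper is not reproduced, and overflow and underflow handling is omitted.

```python
# Minimal sketch of difference-expansion embedding for one pixel pair.
def de_embed(x: int, y: int, bit: int):
    """Embed one bit into the expanded difference of a pixel pair."""
    l = (x + y) // 2          # integer average (kept invariant)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int):
    """Recover the embedded bit and the original pixel pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 % 2, h2 // 2
    return bit, l + (h + 1) // 2, l - h // 2

x2, y2 = de_embed(100, 98, 1)
assert de_extract(x2, y2) == (1, 100, 98)   # fully reversible
```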

Keywords: information hiding, edge direction, difference expansion, integer transform

Procedia PDF Downloads 473
19252 A Method for Modeling Flexible Manipulators: Transfer Matrix Method with Finite Segments

Authors: Haijie Li, Xuping Zhang

Abstract:

This paper presents a computationally efficient method for modeling robot manipulators with flexible links and joints. The approach combines the Discrete Time Transfer Matrix Method with the Finite Segment Method, in which the flexible links are discretized into a number of rigid segments connected by torsion springs, and the flexibility of the joints is also modeled by torsion springs. The proposed method avoids assembling the global dynamic equations and has the advantage of modeling non-uniform manipulators. Experiments and simulations of a single-link flexible manipulator are conducted to verify the proposed methodology. Simulations of a three-link robot arm with link and joint flexibility are also performed.

Keywords: flexible manipulator, transfer matrix method, linearization, finite segment method

Procedia PDF Downloads 421
19251 Features of Normative and Pathological Realizations of Sibilant Sounds for Computer-Aided Pronunciation Evaluation in Children

Authors: Zuzanna Miodonska, Michal Krecichwost, Pawel Badura

Abstract:

Sigmatism (lisping) is a speech disorder in which sibilant consonants are mispronounced. The diagnosis of this phenomenon is usually based on auditory assessment. However, progress in speech analysis techniques creates the possibility of developing computer-aided sigmatism diagnosis tools. The aim of the study is to statistically verify whether specific acoustic features of sibilant sounds may be related to pronunciation correctness. Such knowledge can be of great importance when implementing classifiers and designing novel tools for automatic sibilant pronunciation evaluation. The study covers the analysis of various speech signal measures, including features proposed in the literature for describing normative sibilant realization. Amplitudes and frequencies of three fricative formants (FF) are extracted based on local spectral maxima of the friction noise. Skewness, kurtosis, four normalized spectral moments (SM), and 13 mel-frequency cepstral coefficients (MFCC) with their 1st and 2nd derivatives (13 Delta and 13 Delta-Delta MFCC) are included in the analysis as well. The resulting feature vector contains 51 measures. The experiments are performed on a speech corpus containing words with selected sibilant sounds (/ʃ, ʒ/) pronounced by 60 preschool children with proper pronunciation or with natural pathologies. In total, 224 /ʃ/ segments and 191 /ʒ/ segments are employed in the study. The Mann-Whitney U test is employed for the comparison of sigmatism and normative pronunciation. Statistically significant differences at p < 0.05 are obtained for most of the proposed features between children divided into these two groups. All spectral moments and fricative formants appear to be distinctive between pathological and proper pronunciation. These metrics describe the friction noise characteristic of sibilants, which makes them particularly promising for use in sibilant evaluation tools. The correspondences found between phoneme feature values and an expert evaluation of pronunciation correctness encourage the involvement of speech analysis tools in the diagnosis and therapy of sigmatism. The proposed feature extraction methods could be used in computer-assisted sigmatism diagnosis or therapy systems.
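
A minimal sketch of the MFCC part of the feature vector and of the Mann-Whitney U comparison between groups is given below, assuming librosa and SciPy; the file lists and group assignment are hypothetical, and the fricative-formant and spectral-moment features are omitted.

```python
# Minimal sketch: 13 MFCCs plus 1st/2nd derivatives per segment, then a
# Mann-Whitney U test for each feature between the two pronunciation groups.
import librosa
import numpy as np
from scipy.stats import mannwhitneyu

def mfcc_features(path: str) -> np.ndarray:
    """13 MFCCs with Delta and Delta-Delta, averaged over the segment."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.vstack([mfcc, librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.mean(axis=1)                         # 39 values per segment

normative_paths = ["norm_01.wav", "norm_02.wav"]      # placeholder file lists
pathological_paths = ["path_01.wav", "path_02.wav"]
normative = np.array([mfcc_features(p) for p in normative_paths])
pathological = np.array([mfcc_features(p) for p in pathological_paths])

for i in range(normative.shape[1]):                   # test each feature separately
    stat, p = mannwhitneyu(normative[:, i], pathological[:, i])
    print(f"feature {i:02d}: U = {stat:.1f}, p = {p:.4f}")
```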

Keywords: computer-aided pronunciation evaluation, sigmatism diagnosis, speech signal analysis, statistical verification

Procedia PDF Downloads 293
19250 Virulence Phenotypes Among Multi-Drug Resistant Uropathogenic Bacteria

Authors: V. V. Lakshmi, Y. V. S. Annapurna

Abstract:

Urinary tract infection (UTI) is one of the most common infectious diseases seen in the community. Susceptible individuals experience multiple episodes and progress to acute pyelonephritis or uro-sepsis, or develop asymptomatic bacteriuria (ABU). The ability to cause extraintestinal infections depends on several virulence factors required for survival at extraintestinal sites. The presence of virulence phenotypes enhances the pathogenicity of these otherwise commensal organisms and thus augments their ability to cause extraintestinal infections, the most frequent of which are urinary tract infections (UTIs). The present study focuses on the detection of the virulence characters exhibited by uropathogenic organisms and the most common factors exhibited by the local pathogens. A total of 700 isolates of E. coli and Klebsiella spp. were included in the study. These were isolated over a period of three years from patients in local hospitals reported to be suffering from UTI. Isolation and identification were performed based on Gram characters and IMViC reactions. Antibiotic sensitivity profiling was carried out by the disc diffusion method, and multi-drug resistant strains with a MAR index of 0.7 were further selected. The virulence features examined included exopolysaccharide production, protease and gelatinase production, hemolysin production, haemagglutination, and hydrophobicity. Exopolysaccharide production was the most predominant virulence feature among the isolates when checked by the Congo red method. Biofilm production, examined in microtitre plates using an ELISA reader, confirmed that this is the major factor contributing to the virulence of the pathogens, followed by hemolysin production.

Keywords: Escherichia coli, Klebsiella sp, Uropathogens, Virulence features.

Procedia PDF Downloads 415
19249 Dynamic Response Analysis of Structure with Random Parameters

Authors: Ahmed Guerine, Ali El Hafidi, Bruno Martin, Philippe Leclaire

Abstract:

In this paper, we propose a method for computing the dynamic response of multi-storey structures with uncertain-but-bounded parameters. The effectiveness of the proposed method is demonstrated by a numerical example of a three-storey structure. The equation of motion is integrated numerically using Newmark's method, and the numerical results are obtained by the proposed method. The results of the interval analysis method are compared with the results of a probabilistic approach. The interval analysis method provides a mean curve that lies between the upper and lower bounds obtained from the probabilistic approach.
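
For illustration, a Newmark (average acceleration) integration of a three-storey shear-building model M u'' + C u' + K u = F(t) could look like the sketch below; the mass, stiffness, damping, and loading values are placeholders rather than the paper's data, and the interval analysis itself is not reproduced.

```python
# Minimal sketch of Newmark time integration for a 3-storey shear building.
import numpy as np

def newmark(M, C, K, F, u0, v0, dt, beta=0.25, gamma=0.5):
    n_steps, n_dof = F.shape
    u = np.zeros((n_steps, n_dof)); v = np.zeros_like(u); acc = np.zeros_like(u)
    u[0], v[0] = u0, v0
    acc[0] = np.linalg.solve(M, F[0] - C @ v0 - K @ u0)
    a0, a1 = 1/(beta*dt**2), gamma/(beta*dt)
    a2, a3 = 1/(beta*dt), 1/(2*beta) - 1
    a4, a5 = gamma/beta - 1, dt/2*(gamma/beta - 2)
    a6, a7 = dt*(1 - gamma), gamma*dt
    K_eff = K + a0*M + a1*C                       # effective stiffness
    for i in range(n_steps - 1):
        rhs = (F[i+1] + M @ (a0*u[i] + a2*v[i] + a3*acc[i])
                      + C @ (a1*u[i] + a4*v[i] + a5*acc[i]))
        u[i+1] = np.linalg.solve(K_eff, rhs)
        acc[i+1] = a0*(u[i+1] - u[i]) - a2*v[i] - a3*acc[i]
        v[i+1] = v[i] + a6*acc[i] + a7*acc[i+1]
    return u, v, acc

# three identical storeys: lumped masses and inter-storey stiffnesses (placeholders)
m, k = 1.0e3, 2.0e6
M = np.diag([m, m, m])
K = np.array([[2*k, -k, 0], [-k, 2*k, -k], [0, -k, k]], float)
C = 0.05*M + 0.002*K                              # assumed Rayleigh damping
dt, T = 0.01, 5.0
t = np.arange(0, T, dt)
F = np.zeros((len(t), 3)); F[:, 0] = 1.0e3*np.sin(2*np.pi*1.5*t)  # load on 1st storey
u, v, acc = newmark(M, C, K, F, np.zeros(3), np.zeros(3), dt)
print("peak top-storey displacement:", np.abs(u[:, 2]).max())
```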

Keywords: multi-storey structure, dynamic response, interval analysis method, random parameters

Procedia PDF Downloads 180
19248 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach

Authors: Aliaksandr Huminski

Abstract:

Semantic Role Labeling (SRL), as a shallow semantic parsing approach, includes the recognition and labeling of the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). Thus, SRL can answer key questions such as ‘Who’, ‘When’, ‘What’, ‘Where’ in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Let us consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. SRL for (1) and (2) will be significantly different. For the verb put in (1) it is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as, say, in the case of the verb drink, where the participant’s feature of being liquid is shared with the verb. This capture looks like a total fusion of meaning and cannot be decomposed in a direct way (in comparison with compound verbs like babysit or breastfeed). From this perspective, SRL looks really shallow as a semantic representation. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it is assumed by default that two different but semantically identical sentences must have the same semantic structure; otherwise, we will have different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested. Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; Vpr is a primitive verb, i.e., a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as: Vcp = Vpr + P; Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants captured by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement. In this case, the verb drink can be represented as a man-made movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation that is unique for semantically identical sentences. This leads to the unification of semantic representation, and the critical flaw of SRL described above is resolved.

Keywords: decomposition, labeling, primitive verbs, semantic roles

Procedia PDF Downloads 356
19247 Virulence Phenotypes among Multi-Drug Resistant Uropathogenic E. coli and Klebsiella spp.

Authors: V. V. Lakshmi, Y. V. S. Annapurna

Abstract:

Urinary tract infection (UTI) is one of the most common infectious diseases seen in the community. Susceptible individuals experience multiple episodes and progress to acute pyelonephritis or uro-sepsis, or develop asymptomatic bacteriuria (ABU). The ability to cause extraintestinal infections depends on several virulence factors required for survival at extraintestinal sites. The presence of virulence phenotypes enhances the pathogenicity of these otherwise commensal organisms and thus augments their ability to cause extraintestinal infections, the most frequent of which are urinary tract infections (UTIs). The present study focuses on the detection of the virulence characters exhibited by uropathogenic organisms and the most common factors exhibited by the local pathogens. A total of 700 isolates of E. coli and Klebsiella spp. were included in the study. These were isolated over a period of three years from patients in local hospitals reported to be suffering from UTI. Isolation and identification were performed based on Gram characters and IMViC reactions. Antibiotic sensitivity profiling was carried out by the disc diffusion method, and multi-drug resistant strains with a MAR index of 0.7 were further selected. The virulence features examined included exopolysaccharide production, protease and gelatinase production, hemolysin production, haemagglutination, and hydrophobicity. Exopolysaccharide production was the most predominant virulence feature among the isolates when checked by the Congo red method. Biofilm production, examined in microtitre plates using an ELISA reader, confirmed that this is the major factor contributing to the virulence of the pathogens, followed by hemolysin production.

Keywords: Escherichia coli, Klebsiella spp, Uropathogens, virulence features

Procedia PDF Downloads 309
19246 Numerical Solutions of Generalized Burger-Fisher Equation by Modified Variational Iteration Method

Authors: M. O. Olayiwola

Abstract:

Numerical solutions of the generalized Burger-Fisher equation are obtained using a Modified Variational Iteration Method (MVIM) with minimal computational effort. The computed results obtained with this technique have been compared with other results. The present method is seen to be a very reliable alternative to some existing techniques for such nonlinear problems.
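
For reference, the generalized Burger-Fisher equation is commonly written in the form below, and the variational iteration method builds successive approximations from a correction functional of the generic form shown; this is a hedged statement of the standard formulation, since the abstract does not give the paper's exact parameters.

```latex
\begin{gather*}
u_t + \alpha\, u^{\delta} u_x - u_{xx} = \beta\, u\bigl(1 - u^{\delta}\bigr),\\[4pt]
u_{n+1}(x,t) = u_n(x,t) + \int_0^{t} \lambda(\tau)\Bigl[
\partial_\tau u_n + \alpha\,\tilde u_n^{\delta}\,\partial_x \tilde u_n
- \partial_{xx} \tilde u_n - \beta\,\tilde u_n\bigl(1 - \tilde u_n^{\delta}\bigr)\Bigr]d\tau,
\end{gather*}
```

where λ(τ) is the general Lagrange multiplier and the tilde marks restricted variations.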

Keywords: burger-fisher, modified variational iteration method, lagrange multiplier, Taylor’s series, partial differential equation

Procedia PDF Downloads 425
19245 Spectral Domain Fast Multipole Method for Solving Integral Equations of One and Two Dimensional Wave Scattering

Authors: Mohammad Ahmad, Dayalan Kasilingam

Abstract:

In this paper, a spectral domain implementation of the fast multipole method is presented. It is shown that the aggregation, translation, and disaggregation stages of the fast multipole method (FMM) can be performed using spectral domain (SD) analysis. The spectral domain fast multipole method (SD-FMM) has the advantage of eliminating the near-field/far-field classification used in the conventional FMM formulation. The study focuses on the application of SD-FMM to the one-dimensional (1D) and two-dimensional (2D) electric field integral equation (EFIE). The cases of a perfectly conducting strip and of circular and square cylinders are numerically analyzed and compared with the results from the standard method of moments (MoM).

Keywords: electric field integral equation, fast multipole method, method of moments, wave scattering, spectral domain

Procedia PDF Downloads 400
19244 Analytical Method Development and Validation of Stability-Indicating RP-HPLC Method for Determination of Atorvastatin and Methylcobalamine

Authors: Alkaben Patel

Abstract:

An easy, rapid, economical, precise, and accurate stability-indicating RP-HPLC method for the simultaneous estimation of Atorvastatin and Methylcobalamine in their combined dosage form has been developed. The separation was achieved on an LC-20 AT C18 (250 mm x 4.6 mm x 2.6 mm) column with water (pH 3.5):methanol 70:30 as the mobile phase, at a flow rate of 1 ml/min. Detection was carried out at a wavelength of 215 nm. The drug was subjected to the stress conditions of hydrolysis, oxidation, photolysis, and thermal degradation.

Keywords: RP-HPLC, atorvastatin, methylcobalamine, method development, validation

Procedia PDF Downloads 326
19243 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibit strong fluctuations at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique that is part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which consists in building a bank of filters from the segmentation of the original signal’s Fourier spectrum. The method used is based on the idea utilized in the construction of both Littlewood-Paley and Meyer’s wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection, in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore allows the mode-mixing problem to be overcome. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, it does not allow the detected frequencies to be associated with a specific mode of variability, as the EMD technique does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. Results obtained by EMD, EWT, and EAWD on time series of total ozone columns recorded at Reunion Island over the 1978-2019 period are compared and discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
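
A minimal sketch of the EMD step on a synthetic two-mode signal is given below, assuming the PyEMD (EMD-signal) package; the EWT segmentation and the proposed EAWD coupling are not reproduced here.

```python
# Minimal sketch: decompose a synthetic signal into IMFs and report the
# dominant frequency of each mode.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 10, 2000)
signal = (np.sin(2*np.pi*0.5*t) + 0.4*np.sin(2*np.pi*5*t)
          + 0.1*np.random.randn(t.size))

imfs = EMD().emd(signal)                 # auto-adaptive decomposition into IMFs
print(f"{imfs.shape[0]} components extracted")
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
for k, imf in enumerate(imfs):
    # dominant frequency of each component from its Fourier spectrum
    peak = freqs[np.abs(np.fft.rfft(imf)).argmax()]
    print(f"component {k}: peak at {peak:.2f} Hz")
```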

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 128
19242 Classroom Management Practices of Hotel, Restaurant, and Institution Management Instructors

Authors: Diana Ruth Caga-Anan

Abstract:

Classroom management is a critical skill, but styles are constantly evolving. It is constantly under pressure, particularly at the college level, due to diversity in student profiles, modes of delivery, and the marketization of higher education. This study sought to analyze the extent of implementation of classroom management practices (CMPs) by the college instructors of the Hotel, Restaurant, and Institution Management program of a premier university in the Philippines. It was also determined whether their length of teaching affects their classroom management style. A questionnaire with sixteen ‘evidence-based’ CMPs grouped into five critical features of classroom management, adopted from the literature search of Simonsen et al. (2008), was administered to 4 instructor-respondents and to their 88 students. Weighted mean scores of each of the CMPs revealed that there were differences between the instructors’ self-scores and their students’ ratings of their implementation of CMPs. The critical feature of classroom management ‘actively engage students in observable ways’ got the highest mean score, corresponding to ‘always’ from the instructors’ self-rating and ‘frequently’ from their students’ ratings. However, ‘use a continuum of strategies to respond to inappropriate behaviors’ got the lowest scores from both the instructors and their students, corresponding only to ‘occasionally’. Analysis of variance showed that the only CMP affected by the length of teaching is the practice of ‘prompting students to respond’. Based on the findings, some recommendations for the instructors to improve on the critical feature where they scored low are discussed, and suggestions are included for future research.

Keywords: classroom management, CMPs, critical features, evidence-based classroom management practices

Procedia PDF Downloads 162
19241 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection

Authors: Muhammad Ali

Abstract:

Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion detection. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scan, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction. IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart and IoT applications, there is a need for intelligent processing and analysis of data, so our approach is to secure such systems. We train several machine learning models, which are compared on how accurately they predict attacks and anomalies on IoT systems, considering IoT applications, with ANOVA-based feature selection yielding smaller prediction models for evaluating network traffic and helping to protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, decision tree (DT), and random forest (RF), aiming at the most satisfactory test accuracy with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, G.M., MCC, and AUC-ROC. The Random Forest algorithm achieved the best results with less prediction time, with an accuracy of 99.98%.
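
A minimal sketch of ANOVA-based feature selection followed by a random forest classifier, using scikit-learn, is given below; the CSV file and the label column are hypothetical stand-ins for the IoT traffic dataset used in the paper, and the other classifiers and metrics are omitted.

```python
# Minimal sketch: ANOVA F-test feature selection feeding a random forest.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("iot_traffic.csv")                       # hypothetical dataset
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = Pipeline([
    ("anova", SelectKBest(f_classif, k=10)),              # keep 10 highest F-score features
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
]).fit(X_tr, y_tr)

print(classification_report(y_te, model.predict(X_te)))
```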

Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection

Procedia PDF Downloads 109
19240 A Comparison of Bias Among Relaxed Divisor Methods Using 3 Bias Measurements

Authors: Sumachaya Harnsukworapanich, Tetsuo Ichimori

Abstract:

The apportionment method is used by many countries to calculate the distribution of seats in political bodies. For example, this method is used in the United States (U.S.) to distribute House seats proportionally based on the population of each electoral district. Famous apportionment methods include the divisor methods called the Adams Method, Dean Method, Hill Method, Jefferson Method, and Webster Method. Sometimes the results from the implementation of these divisor methods are unfair and include errors. Therefore, it is important to examine the optimization of these methods by using a bias measurement to obtain precise and fair results. In this research, we investigate the bias of divisor methods in the U.S. House of Representatives toward large and small states by applying the Stolarsky Mean Method. We compare the bias of the apportionment methods by using two famous bias measurements: the Balinski and Young measurement and the Ernst measurement. Both measurements have a formula for large and small states. The third measurement, however, which was created by the researchers, did not factor the element of large and small states into the formula. All three measurements are compared, and the results show that our measurement produces results similar to the other two famous measurements.
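
For illustration, the classical divisor (highest-averages) methods named above can be implemented with a common routine that differs only in the divisor sequence, as in the sketch below; the populations are illustrative, and the Stolarsky-mean relaxation and the bias measurements are not reproduced.

```python
# Minimal sketch of a highest-averages apportionment routine; the divisor
# sequences correspond to the classical methods named in the abstract.
import heapq
import math

DIVISORS = {
    "Adams":     lambda s: s if s > 0 else 1e-12,               # d(0)=0: every state seated first
    "Dean":      lambda s: 2*s*(s+1)/(2*s+1) if s > 0 else 1e-12,   # harmonic mean
    "Hill":      lambda s: math.sqrt(s*(s+1)) if s > 0 else 1e-12,  # geometric mean
    "Webster":   lambda s: s + 0.5,                              # arithmetic mean
    "Jefferson": lambda s: s + 1,                                # favors large states
}

def apportion(populations: dict, house_size: int, method: str) -> dict:
    d = DIVISORS[method]
    seats = {state: 0 for state in populations}
    # max-heap of priority values pop / d(current seats)
    heap = [(-pop / d(0), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(house_size):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        heapq.heappush(heap, (-populations[state] / d(seats[state]), state))
    return seats

pops = {"A": 5_030_000, "B": 2_410_000, "C": 940_000, "D": 620_000}  # illustrative
for m in DIVISORS:
    print(m, apportion(pops, 20, m))
```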

Keywords: apportionment, bias, divisor, fair, measurement

Procedia PDF Downloads 360
19239 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method

Authors: Jurriaan Gillissen

Abstract:

This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of whether an implicit time integration scheme is used. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
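
In symbols, the regularized cost functional and the gradient update described above take roughly the following form; the quadratic mismatch term and the high-order Laplacian regularizer are written generically, consistent with the abstract rather than as the authors' exact definitions.

```latex
\begin{gather*}
J(u_0) = \tfrac{1}{2}\int_\Omega \bigl|u(x,1;u_0) - v_1(x)\bigr|^2\,dx
\;+\; \gamma \int_\Omega \bigl|\Delta^{k} u_0(x)\bigr|^2\,dx,\\[4pt]
u_0^{(m+1)} = u_0^{(m)} - \eta\,\nabla_{u_0} J\bigl(u_0^{(m)}\bigr),
\end{gather*}
```

with the gradient of J obtained from one forward Navier-Stokes solve and one backward adjoint solve per iteration.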

Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence

Procedia PDF Downloads 213
19238 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping

Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung

Abstract:

Lightweight unmanned aerial vehicles (UAVs) carrying novel sensors offer a low-cost approach for data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in a complex environment and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum auerum and Acanthus ilicifolius. Other species involved various graminaceous plants, arbor, shrub, and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change, which might increase the difficulty of distinguishing species. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and a 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral image was corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the spectral bands significant for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM), and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA, and MNF, as well as multi-source subsets adding the DSM to the 20 spectral bands, served as input to the maximum likelihood classifier (MLC) and the SVM classifier to compare the classification results. The classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA, and SVM; the MNF transformation accuracy was even higher than the all-bands input result. The selected bands frequently lay along the green peak, the red edge, and the near infrared. Additionally, DA found that the chlorophyll absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor, and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and the morphological variation between species. With respect to the classifier, the nonparametric SVM outperformed MLC for the high-dimensional and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it costs more time than MLC. The best result was obtained by combining the MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas with a low cost of time and labour. In addition, the findings on the positive effect of the DSM as well as on spectral feature identification indicate that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.

Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)

Procedia PDF Downloads 250
19237 Solution for Thick Plate Resting on Winkler Foundation by Symplectic Geometry Method

Authors: Mei-Jie Xu, Yang Zhong

Abstract:

Based on the symplectic geometry method, the theory of the Hamiltonian system can be applied to the analysis of problems in the theory of elasticity and to the solution of elliptic partial differential equations. With this technique, this paper derives the theoretical solution for a thick rectangular plate with four free edges supported on a Winkler foundation by the variable separation method. In this method, the governing equation of the thick plate is first transformed into state equations in the Hamilton space. The theoretical solution of the problem is then obtained by applying the method of variable separation based on the Hamilton system. Compared with traditional theoretical solutions for rectangular plates, this method has the advantage of not having to assume the form of the deflection functions in the solution process. Numerical examples are presented to verify the validity of the proposed solution method.

Keywords: symplectic geometry method, Winkler foundation, thick rectangular plate, variable separation method, Hamilton system

Procedia PDF Downloads 294
19236 Comparative Study of Soliton Collisions in Uniform and Nonuniform Magnetized Plasma

Authors: Renu Tomar, Hitendra K. Malik, Raj P. Dahiya

Abstract:

Similar to sound waves in air, plasmas support the propagation of ion waves, which evolve into solitary structures when the effects of nonlinearity and dispersion are balanced. Ion acoustic solitary waves have been investigated in detail in homogeneous plasmas, inhomogeneous plasmas, and magnetized plasmas. Ion acoustic solitary waves are also found to reflect from a density gradient or boundary present in the plasma after propagating. Another interesting feature of solitary waves is their collision. In the present work, we carry out analytical calculations for the head-on collision of solitary waves in a magnetized plasma which contains dust grains in addition to ions and electrons. For this, we employ the Poincaré-Lighthill-Kuo (PLK) method. To lowest nonlinear order, the problem of colliding solitary waves leads to KdV (modified KdV) equations and also yields the phase shifts that occur in the interaction. These calculations are carried out for uniform and nonuniform plasmas, and the results on the soliton properties are discussed in detail.

Keywords: inhomogeneous magnetized plasma, dust charging, soliton collisions, magnetized plasma

Procedia PDF Downloads 461
19235 Tree Species Classification Using Effective Features of Polarimetric SAR and Hyperspectral Images

Authors: Milad Vahidi, Mahmod R. Sahebi, Mehrnoosh Omati, Reza Mohammadi

Abstract:

Forest management organizations need information to perform their work effectively, and remote sensing is an effective method to acquire such information about the Earth. Two datasets of remote sensing images were used to classify forested regions. First, all extractable features from the hyperspectral and PolSAR images were extracted. The optical features were spectral indices related to chemical and water content, structural indices, effective bands, and absorption features; the PolSAR features were the original data, target decomposition components, and SAR discriminator features. Second, particle swarm optimization (PSO) and the genetic algorithm (GA) were applied to select the optimal features. Furthermore, the support vector machine (SVM) classifier was used to classify the image. The results showed that the combination of PSO and SVM had higher overall accuracy than the other cases, providing an overall accuracy of about 90.56%. The effective features were the spectral indices, the bands in the shortwave infrared (SWIR) and visible ranges, and certain PolSAR features.

Keywords: hyperspectral, PolSAR, feature selection, SVM

Procedia PDF Downloads 408
19234 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification

Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro

Abstract:

Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that classifies user responses as inputs for an interactive voice response system. A dataset with the Wolof-language words ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach was adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients were implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound-frequency feature spectra, and image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications on both web and mobile platforms.
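
A minimal sketch of the classification stage (MFCC "images" fed to a small CNN for binary yes/no classification) is given below, assuming librosa and TensorFlow/Keras; the input shapes, network depth, and augmentation pipeline are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch: fixed-size MFCC maps classified by a small CNN.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_image(path: str, sr: int = 16000, n_mfcc: int = 40, frames: int = 100) -> np.ndarray:
    """Turn one recording into a fixed-size (n_mfcc, frames, 1) MFCC map."""
    y, _ = librosa.load(path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=frames, axis=1)   # pad or trim to fixed width
    return m[..., np.newaxis]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(40, 100, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # 'yes' vs 'no'
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# training would look like (wav_paths and labels are hypothetical):
# X = np.stack([mfcc_image(p) for p in wav_paths]); y = np.array(labels)
# model.fit(X, y, epochs=20, validation_split=0.2)
```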

Keywords: automatic speech recognition, interactive voice response, voice response recognition, wolof word classification

Procedia PDF Downloads 104
19233 Trajectory Generation Procedure for Unmanned Aerial Vehicles

Authors: Amor Jnifene, Cedric Cocaud

Abstract:

One of the most constraining problems facing the development of autonomous vehicles is the limitation of current technologies. Guidance and navigation controllers need to be faster and more robust. Communication data links need to be more reliable and secure. For an Unmanned Aerial Vehicle (UAV) to be useful and fully autonomous, one important feature that needs to be an integral part of the navigation system is autonomous trajectory planning. The work discussed in this paper presents a method for on-line trajectory planning for UAVs. This method takes into account various constraints of different types, including specific vectors of approach close to target points, multiple objectives, and other constraints related to speed, altitude, and obstacle avoidance. The trajectory produced by the proposed method ensures a smooth transition between different segments, satisfies the minimum curvature imposed by the dynamics of the UAV, and finds the optimum velocity based on available atmospheric conditions. Given a set of objective points and waypoints, a skeleton of the trajectory is constructed first by linking all waypoints with straight segments based on the order in which they are encountered in the path. Secondly, vectors of approach (VoA) are assigned to objective waypoints and to their preceding transitional waypoints, if any. Thirdly, the straight segments are replaced by 3D curvilinear trajectories, taking into account the aircraft dynamics. In summary, this work presents a method for on-line 3D trajectory generation (TG) for Unmanned Aerial Vehicles (UAVs). The method takes as inputs a series of waypoints and an optional vector of approach for each of the waypoints. Using a dynamic model based on the performance equations of fixed-wing aircraft, the TG computes a set of 3D parametric curves establishing a course between every pair of waypoints and assembles these sets of curves to construct a complete trajectory. The algorithm ensures geometric continuity at each connection point between two sets of curves. The geometry of the trajectory is optimized according to the dynamic characteristics of the aircraft, such that the result translates into a series of dynamically feasible maneuvers.

Keywords: trajectory planning, unmanned autonomous air vehicle, vector of approach, waypoints

Procedia PDF Downloads 398
19232 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos

Authors: Nassima Noufail, Sara Bouhali

Abstract:

In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips, and we use the K-means algorithm for this segmentation; our goal is to find groups based on similarity in the video. Applying K-means clustering to all the frames is time-consuming; therefore, we start by identifying transition frames, where the scene in the video changes significantly, and then we apply K-means clustering to these transition frames. We use two image filters, the Gaussian filter and the Laplacian of Gaussian. Each filter extracts a set of features from the frames: the Gaussian filter blurs the image and omits the higher frequencies, and the Laplacian of Gaussian detects regions of rapid intensity change. We then use this vector of filter responses as input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map, in which similar pixels are grouped. We then compute a cluster score indicating how near clusters are to each other and plot a signal representing frame number versus clustering score. Our hypothesis was that the evolution of the signal would not change if semantically related events were happening in the scene. We mark the breakpoints at which the root mean square level of the signal changes significantly, and each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from the first part, we randomly select a 16-frame clip and extract spatiotemporal features using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence, we use principal component analysis (PCA) for dimensionality reduction. The final part is the classification. The C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) to train the model, and we use a multi-classifier to detect the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and we achieved an accuracy that outperforms the state of the art by 1.2%.
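
A minimal sketch of the filter-response and K-means idea applied per frame is given below, assuming OpenCV and scikit-learn; the pixel-level visual map, the cluster score signal, and the C3D/SVM stages described above are not reproduced, and the video path is hypothetical.

```python
# Minimal sketch: Gaussian / Laplacian-of-Gaussian responses as per-frame
# features, grouped with K-means; label changes mark candidate boundaries.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def frame_features(gray: np.ndarray) -> np.ndarray:
    blur = cv2.GaussianBlur(gray, (9, 9), 2.0)          # low-pass response
    log = cv2.Laplacian(blur, cv2.CV_32F)               # Laplacian of Gaussian
    return np.array([blur.mean(), blur.std(), np.abs(log).mean(), log.std()])

cap = cv2.VideoCapture("video.mp4")                     # hypothetical input file
feats = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    feats.append(frame_features(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)))
cap.release()

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(np.array(feats))
boundaries = np.flatnonzero(np.diff(labels)) + 1        # label change = candidate cut
print("candidate segment boundaries at frames:", boundaries[:20])
```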

Keywords: video segmentation, action detection, classification, K-means, C3D

Procedia PDF Downloads 67
19231 Step Method for Solving Nonlinear Two Delays Differential Equation in Parkinson’s Disease

Authors: H. N. Agiza, M. A. Sohaly, M. A. Elfouly

Abstract:

Parkinson's disease (PD) is a heterogeneous disorder with a common age of onset, symptoms, and progression levels. In this paper, we solve the PD model analytically as a non-linear delay differential equation using the method of steps. The method of steps transforms a system of delay differential equations (DDEs) into a sequence of systems of ordinary differential equations (ODEs). In some numerical examples, the analytical solution is difficult to obtain, so we approximate the analytical solution of the resulting ODEs using the Picard method and the Taylor method.
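
A minimal sketch of the method of steps on a single-delay equation x'(t) = -x(t - τ) with constant history is given below, using SciPy; the two-delay Parkinson's model itself is not reproduced, but the segment-by-segment reduction of the DDE to ODEs is the same idea.

```python
# Minimal sketch of the method of steps: on each interval of length tau the
# delayed term is already known, so the DDE reduces to an ODE.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

tau, n_segments = 1.0, 5
past = lambda t: 1.0                      # constant history: x(t) = 1 for t <= 0

t_all, x_all = [0.0], [1.0]
for k in range(n_segments):
    t0, t1 = k * tau, (k + 1) * tau
    # the delayed argument t - tau lies in the previously solved interval
    rhs = lambda t, x: [-float(past(t - tau))]
    sol = solve_ivp(rhs, (t0, t1), [x_all[-1]], max_step=tau / 100)
    t_all.extend(sol.t[1:]); x_all.extend(sol.y[0, 1:])
    past = interp1d(sol.t, sol.y[0], fill_value="extrapolate")  # history for next segment

print("x(t) at t = 5*tau:", x_all[-1])
```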

Keywords: Parkinson's disease, step method, delay differential equation, two delays

Procedia PDF Downloads 195
19230 2D and 3D Unsteady Simulation of the Heat Transfer in the Sample during Heat Treatment by Moving Heat Source

Authors: Zdeněk Veselý, Milan Honner, Jiří Mach

Abstract:

The aim of this work is to establish 2D and 3D models of the direct unsteady task of sample heat treatment by a moving heat source, employing a computer model based on the finite element method. The complex boundary condition on the heat-loaded sample surface is the essential feature of the task. The computer model describes the heat treatment of the sample during the movement of the heat source over the sample surface. The 2D task of the sample cross-section is used as the basic model, and the possibilities of extending it from a 2D to a 3D task are discussed. The effect of adding the third model dimension on the temperature distribution in the sample is shown. The influence of various model parameters on the sample temperatures is compared, and the influence of the heat source motion on the depth of material heat treatment is shown for several velocities of the movement. The presented computer model is prepared for utilization in the laser treatment of machine parts.

Keywords: computer simulation, unsteady model, heat treatment, complex boundary condition, moving heat source

Procedia PDF Downloads 383
19229 A Superposition Method in Analyses of Clamped Thick Plates

Authors: Alexander Matrosov, Guriy Shirunov

Abstract:

A superposition method based on Lamé's idea is used to obtain a general analytical solution for analyzing the stress and strain state of a rectangular isotropic elastic thick plate. The solution is built by using three solutions of the method of initial functions in the form of double trigonometric series. The results of the bending of a thick plate under normal stress on its top face, with two opposite sides clamped while the others are free of load, are presented and compared with FEM modelling.

Keywords: general solution, method of initial functions, superposition method, thick isotropic plates

Procedia PDF Downloads 584
19228 Solution of Hybrid Fuzzy Differential Equations

Authors: Mahmood Otadi, Maryam Mosleh

Abstract:

Hybrid differential equations have a wide range of applications in science and engineering. In this paper, the homotopy analysis method (HAM) is applied to obtain series solutions of hybrid differential equations. Using the homotopy analysis method, it is possible to find the exact solution or an approximate solution of the problem. Comparisons are made between the improved predictor-corrector method, the homotopy analysis method, and the exact solution. Finally, we illustrate our approach with some numerical examples.

Keywords: fuzzy number, fuzzy ODE, HAM, approximate method

Procedia PDF Downloads 503