Search results for: Bayesian inference
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 567

177 A Unified Deep Framework for Joint 3D Pose Estimation and Action Recognition from a Single Color Camera

Authors: Huy Hieu Pham, Houssam Salmane, Louahdi Khoudour, Alain Crouzil, Pablo Zegers, Sergio Velastin

Abstract:

We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from color video sequences. Our approach proceeds in two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel locations of important keypoints of the body. A two-stream neural network is then designed and trained to map the detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that is used to model the spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and to perform action recognition. Experiments on the Human3.6M, Microsoft Research Redmond (MSR) Action3D, and Stony Brook University (SBU) Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference.
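
As a rough illustration of the lifting step (not the authors' implementation), the sketch below shows a minimal two-stream network in PyTorch that maps detected 2D keypoints, plus their detector confidences, to 3D joint positions; the layer sizes, the confidence stream, and the joint count are assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamLifter(nn.Module):
    """Toy two-stream network lifting 2D keypoints to 3D poses.

    Hypothetical sketch: the layer sizes and the split into a coordinate
    stream and a confidence stream are illustrative, not the paper's.
    """
    def __init__(self, num_joints=17, hidden=256):
        super().__init__()
        in_dim = num_joints * 2              # (x, y) per joint
        self.stream_a = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.stream_b = nn.Sequential(nn.Linear(num_joints, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, num_joints * 3)  # (x, y, z) per joint

    def forward(self, kpts_2d, confidences):
        a = self.stream_a(kpts_2d.flatten(1))
        b = self.stream_b(confidences)
        return self.head(torch.cat([a, b], dim=1)).view(-1, kpts_2d.shape[1], 3)

poses_3d = TwoStreamLifter()(torch.rand(8, 17, 2), torch.rand(8, 17))
print(poses_3d.shape)  # torch.Size([8, 17, 3])
```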

Keywords: human action recognition, pose estimation, D-CNN, deep learning

Procedia PDF Downloads 146
176 Explanation and Temporality in International Relations

Authors: Alasdair Stanton

Abstract:

What makes for a good explanation? Twenty years after Wendt’s important treatment of constitution and causation, non-causal explanations (sometimes referred to as ‘understanding’ or ‘descriptive inference’) have become, if not mainstream, at least accepted within International Relations. This article proceeds in two parts. First, it examines Wendt’s constitutional claims closely, and while it agrees there is a difference between causal and constitutional explanation, it rejects the view that constitutional explanations lack temporality; in fact, this author concludes that a constitutional argument is only possible if it relies upon a more foundational, causal argument. Second, through theoretical analysis of the constitutional argument, this research seeks to delineate temporal and non-temporal ways of explaining within International Relations. This article concludes that while constitutional explanations, like other logical arguments including comparative and counter-factual ones, are not truly non-causal explanations, they are not bound as tightly to the ‘real world’ as temporal arguments such as cause-effect, process tracing, or even interpretivist accounts. However, like mathematical models, non-temporal arguments should aim for empirical testability as well as internal consistency. This work aims to give clear theoretical grounding to those authors using non-temporal arguments, but also to encourage them, and their positivist critics, to engage in thoroughgoing empirical tests.

Keywords: causal explanation, constitutional understanding, empirical, temporality

Procedia PDF Downloads 195
175 Tracking Filtering Algorithm Based on ConvLSTM

Authors: Ailing Yang, Penghan Song, Aihua Cai

Abstract:

The nonlinear maneuvering target tracking problem is mainly a state estimation problem when the target motion model is uncertain. Traditional solutions include Kalman filtering, based on the Bayesian filtering framework, and extended Kalman filtering. However, these methods need prior knowledge such as a kinematics model and the state system distribution, and their performance is poor for state estimation of complex dynamic systems without such priors. Therefore, in view of the problems with traditional algorithms, a convolutional LSTM target state estimation (SAConvLSTM-SE) algorithm based on Self-Attention Memory (SAM) is proposed to learn the historical motion state of the target and the error distribution of the measurements at the current time. The measured track point data of airborne radar are processed into data sets. After supervised training, the data-driven deep neural network based on SAConvLSTM can directly output the target state at the next moment. Through experiments on two different maneuvering targets, we find that the network has stronger robustness and better tracking accuracy than existing tracking methods.
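
For context, the classical Bayesian-filtering baseline that the abstract contrasts against fits in a few lines. Below is a minimal constant-velocity Kalman filter in NumPy; the motion model, noise covariances, and track points are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the classical Kalman filter."""
    # Predict with the assumed kinematics model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the new measurement z.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 1.0
F = np.array([[1, dt], [0, 1]])   # constant-velocity model (position, velocity)
H = np.array([[1.0, 0.0]])        # only position is measured
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [1.1, 2.0, 2.9, 4.2]:    # toy noisy track points
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)  # estimated [position, velocity]
```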

Keywords: maneuvering target, state estimation, Kalman filter, LSTM, self-attention

Procedia PDF Downloads 177
174 Performance Analysis of Permanent Magnet Synchronous Motor Using Direct Torque Control Based ANFIS Controller for Electric Vehicle

Authors: Marulasiddappa H. B., Pushparajesh Viswanathan

Abstract:

The use of internal combustion engines (ICEs) is declining day by day because of pollution and limited fuel availability, and in the present scenario the electric vehicle (EV) is taking the place of the ICE vehicle. The performance of EVs can be improved by the proper selection of electric motors. EVs initially preferred induction motors for traction purposes, but due to the complexity of controlling induction motors, the permanent magnet synchronous motor (PMSM) is replacing them owing to its advantages. Direct torque control (DTC) is one of the well-known techniques for PMSM drives in EVs to control torque and speed. However, the presence of torque ripple is the main drawback of this technique, and many control strategies have been proposed to reduce torque ripple in PMSMs. In this paper, an adaptive neuro-fuzzy inference system (ANFIS) controller is proposed to reduce torque ripple and settling time. Performance parameters such as torque, speed, and settling time are compared between a conventional proportional-integral (PI) controller and the ANFIS controller.

Keywords: direct torque control, electric vehicle, torque ripple, PMSM

Procedia PDF Downloads 164
173 A Particle Filter-Based Data Assimilation Method for Discrete Event Simulation

Authors: Zhi Zhu, Boquan Zhang, Tian Jing, Jingjing Li, Tao Wang

Abstract:

Data assimilation is a hybrid model- and data-driven method that dynamically fuses new observation data with a numerical model to iteratively approach the real system state. It is widely used in state prediction and parameter inference for continuous systems. Because of the non-linearity and non-Gaussianity of discrete event systems, the traditional Kalman filter, which is based on linear and Gaussian assumptions, cannot perform data assimilation for such systems, so the particle filter has gradually become the technical approach for discrete event simulation data assimilation. Hence, we propose a particle filter-based discrete event simulation data assimilation method and take an unmanned aerial vehicle (UAV) maintenance service system as a proof of concept for simulation experiments. The experimental results show that the filtered state data are closer to the real state of the system, which verifies the effectiveness of the proposed method. This research can provide a reference framework for the data assimilation process of other complex nonlinear systems, such as discrete-time and agent-based simulations.
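
A minimal bootstrap particle filter conveys the assimilation loop described above: propagate particles through the model, weight them by the observation likelihood, and resample. The sketch below uses a toy 1-D random-walk surrogate in place of the UAV maintenance simulation; all dynamics and noise values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=1000):
    """Minimal bootstrap particle filter for a 1-D nonlinear toy system.

    The random-walk dynamics and Gaussian likelihood are placeholders for
    a discrete event simulation model, which would replace the propagate step.
    """
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        # Propagate each particle through the (surrogate) system model.
        particles = particles + rng.normal(0.0, 0.5, n_particles)
        # Weight particles by the likelihood of the observation.
        weights = np.exp(-0.5 * ((z - particles) / 0.8) ** 2)
        weights /= weights.sum()
        # Resample to concentrate particles on likely states.
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())
    return estimates

print(particle_filter([0.2, 0.7, 1.5, 2.1]))
```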

Keywords: discrete event simulation, data assimilation, particle filter, model and data-driven

Procedia PDF Downloads 14
172 Decision Making, Reward Processing and Response Selection

Authors: Benmansour Nassima, Benmansour Souheyla

Abstract:

The appropriate integration of reward processing and decision making provided by the environment is vital for behavioural success and individuals’ well-being in everyday life. Functional neurological investigation has already provided an inclusive picture of affective and emotional (motivational) processing in the healthy human brain and has recently also focused on the assessment of brain function in anxious and depressed individuals. This article offers an overview of the theoretical approaches that relate emotion and decision-making, and spotlights investigations with anxious or depressed individuals to reveal how emotions can interfere with decision-making. This research aims at incorporating the emotional structure, based on response and stimulation, with a Bayesian approach to decision-making in terms of probability and value processing. It seeks to show how studies of individuals with emotional dysfunctions bear out that alterations of decision-making can be considered in terms of altered probability and value subtraction. The utmost objective is to critically determine whether the probabilistic representation of belief affords a critical approach to scrutinizing alterations in probability and value representation in subjects with anxiety and depression, and to outline the general implications of this approach.

Keywords: decision-making, motivation, alteration, reward processing, response selection

Procedia PDF Downloads 477
171 Role of Cryptocurrency in Portfolio Diversification

Authors: Onur Arugaslan, Ajay Samant, Devrim Yaman

Abstract:

Financial advisors and investors seek new assets that could potentially increase portfolio returns and decrease portfolio risk. Cryptocurrencies represent a relatively new asset class that could serve both these roles. There has been very little research on the risk/return tradeoff of a portfolio consisting of fixed income assets, stocks, and cryptocurrency, and the objective of this study is a rigorous examination of this issue. The data used in the study are the monthly returns on 4-week US Treasury Bills, the S&P Investment Grade Corporate Bond Index, Bitcoin, and the S&P 500 Stock Index. The methodology is the application of Modern Portfolio Theory to evaluate the risk-adjusted returns of portfolios with varying combinations of these assets, using the Sharpe, Treynor, and Jensen indexes, as well as the Sortino and Modigliani measures. The results of the study would include a ranking of the various investment portfolios based on their risk/return characteristics. The conclusions would provide objective empirical inference for investors who are interested in including cryptocurrency in their asset portfolios but are unsure of the risk/return implications.
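
The risk-adjusted measures named above are standard and straightforward to compute from return series. The sketch below evaluates them on synthetic monthly returns; the helper function, the data, and all rates are illustrative assumptions, not the study's results.

```python
import numpy as np

def performance_measures(rp, rm, rf):
    """Sharpe, Treynor, Jensen, Sortino and Modigliani (M2) measures for a
    portfolio return series rp against market returns rm and a per-period
    risk-free rate rf. Synthetic inputs below; not the study's data."""
    excess = rp - rf
    cov = np.cov(rp, rm)
    beta = cov[0, 1] / cov[1, 1]
    downside = np.sqrt(np.mean(np.minimum(rp - rf, 0.0) ** 2))
    sharpe = excess.mean() / rp.std(ddof=1)
    return {
        "Sharpe": sharpe,
        "Treynor": excess.mean() / beta,
        "Jensen alpha": rp.mean() - (rf + beta * (rm.mean() - rf)),
        "Sortino": excess.mean() / downside,
        "Modigliani M2": rf + sharpe * rm.std(ddof=1),
    }

rng = np.random.default_rng(1)
rm = rng.normal(0.008, 0.04, 120)             # market proxy (e.g., S&P 500)
rp = 0.6 * rm + rng.normal(0.004, 0.05, 120)  # portfolio with a crypto-like sleeve
print(performance_measures(rp, rm, rf=0.002))
```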

Keywords: financial economics, portfolio diversification, fixed income securities, cryptocurrency, stock indexes

Procedia PDF Downloads 73
170 Temporary Autonomous Areas in Time and Space: Psytrance Rave Parties as an Expression Area of Altered States of Consciousness in Turkey

Authors: Ugur Cihat Sakarya

Abstract:

This research focuses on psychedelic trance music events in Turkey in the context of altered states of consciousness (ASC). The fieldwork conducted from 2018 to 2019 is the main source of the research. The participant observation method was followed in 15 selected events; to direct the musical experiences of participants, the author also presented performances as a DJ. Ten of these events were open-air festivals, and five were indoor parties. Observations made during fieldwork, together with answers suitable for inference from interviews with participants, artists, DJs, and volunteers, were selected, compiled, and presented. The findings showed that these events are perceived by participants as temporary autonomous areas in both time and space, and that they are suitable areas for expressing themselves as a group (psyfamily) against mainstream culture. The elements observed to complement the altered states of consciousness at these events are music, visual arts, drug use, and the desire for spiritual experiences. It is thought that this first academic study on the topic in Turkey will open a door for future research.

Keywords: consciousness, psychedelic, psytrance, rave, Turkey

Procedia PDF Downloads 135
169 Alterations of Gut Microbiota and Its Metabolomics in Children with 6PPDQ, PBDE, PCB, and Metal(loid) Exposure

Authors: Xia Huo

Abstract:

The composition and metabolites of the gut microbiota can be altered by environmental pollutants. However, the effect of co-exposure to multiple pollutants on the human gut microbiota has not been sufficiently studied. In this study, gut microorganisms and their metabolites were compared between 33 children from Guiyu and 34 children from Haojiang. The exposure level was assessed by the estimated daily intake (EDI) of polybrominated diphenyl ethers (PBDEs), polychlorinated biphenyls (PCBs), 6PPD-quinone (6PPDQ), and metal(loid)s in dust. Significant correlations were found between the EDIs of 6PPDQ, BDE28, PCB52, Ni, and Cu and both the alpha diversity index and specific metabolites in single-element models. The Bayesian kernel machine regression (BKMR) model showed a negative correlation between the EDIs of the five pollutants (6PPDQ, BDE28, PCB52, Ni, and Cu) and the Chao 1 index, particularly beyond the 55th percentile. Furthermore, the EDIs of these five pollutants were positively correlated with levels of the metabolite 2,4-diaminobutyric acid and negatively correlated with levels of d-erythro-sphingosine and d-threitol. Our research suggests that exposure to 6PPDQ, BDE28, PCB52, Ni, and Cu in kindergarten dust is associated with alterations in the gut microbiota and its metabolites. These alterations may be associated with neurodevelopmental abnormalities in children.

Keywords: gut microbiota, 6PPDQ, PBDEs, PCBs, metal(loid)s, BKMR

Procedia PDF Downloads 56
168 Single Imputation for Audiograms

Authors: Sarah Beaver, Renee Bryce

Abstract:

Audiograms detect hearing impairment, but missing values pose problems. This work explores imputation in an attempt to improve accuracy. It implements Linear Regression, Lasso, Linear Support Vector Regression, Bayesian Ridge, K Nearest Neighbors (KNN), and Random Forest machine learning techniques to impute audiogram frequencies ranging from 125 Hz to 8000 Hz. The data contain patients who had or were candidates for cochlear implants. Accuracy is compared across two different nested cross-validation k values. Over 4000 audiograms from 800 unique patients were used. Additionally, training on combined left- and right-ear audiograms is compared against training on single-ear audiograms. The Root Mean Square Error (RMSE) values for the best Random Forest models range from 4.74 to 6.37, with R² values from 0.91 to 0.96. The RMSE values for the best KNN models range from 5.00 to 7.72, with R² values from 0.89 to 0.95. The best imputation models achieved R² between 0.89 and 0.96 and RMSE values less than 8 dB. We also show that classification predictive models performed better with our best imputation models than with constant imputation, by a two percent increase in accuracy.
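
As a hedged sketch of the imputation set-up (with synthetic audiograms, a simplified 5-fold split in place of the paper's nested cross-validation, and arbitrary hyperparameters), one frequency can be held out and predicted from the others:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical audiograms: thresholds (dB HL) at standard frequencies.
freqs = [125, 250, 500, 1000, 2000, 4000, 8000]
rng = np.random.default_rng(2)
base = rng.normal(40, 15, (500, 1))
X_full = np.clip(base + rng.normal(0, 5, (500, len(freqs))), 0, 110)

target_col = 3                    # impute the 1000 Hz threshold
X = np.delete(X_full, target_col, axis=1)
y = X_full[:, target_col]

for name, model in [("RandomForest", RandomForestRegressor(random_state=0)),
                    ("KNN", KNeighborsRegressor(n_neighbors=5)),
                    ("BayesianRidge", BayesianRidge())]:
    pred = cross_val_predict(model, X, y, cv=5)
    rmse = np.sqrt(mean_squared_error(y, pred))
    print(f"{name}: RMSE={rmse:.2f} dB, R^2={r2_score(y, pred):.3f}")
```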

Keywords: machine learning, audiograms, data imputations, single imputations

Procedia PDF Downloads 82
167 FPGA Implementation of a Marginalized Particle Filter for Delineation of P and T Waves of ECG Signal

Authors: Jugal Bhandari, K. Hari Priya

Abstract:

The ECG signal provides important clinical information that can be used to predict heart-related diseases. Accordingly, delineation of the ECG signal is an important task, and delineation of the P and T waves is a complex one. This paper deals with the study and analysis of the ECG signal through the Verilog design of efficient filters together with the MATLAB tool. It includes the generation and simulation of the ECG signal from real-time ECG data, and ECG signal filtering and processing through the analysis of different algorithms and techniques. We design a basic particle filter that generates a dynamic model depending on the present and past input samples and then produces the desired output. Afterwards, the output is processed in MATLAB to obtain the actual shape and accurate range values of the P-wave and T-wave of the ECG signal. QuestaSim, a Mentor Graphics tool, is used for simulation and functional verification. The same design is verified again using Xilinx ISE, which is also used for synthesis, mapping, and bit file generation, and a Xilinx FPGA board is used for implementation of the system. The final FPGA results are verified with ChipScope Pro, where the output data can be observed.

Keywords: ECG, MATLAB, Bayesian filtering, particle filter, Verilog hardware descriptive language

Procedia PDF Downloads 367
166 Regional Flood-Duration-Frequency Models for Norway

Authors: Danielle M. Barna, Kolbjørn Engeland, Thordis Thorarinsdottir, Chong-Yu Xu

Abstract:

Design flood values give estimates of flood magnitude within a given return period and are essential to making adaptive decisions around land use planning, infrastructure design, and disaster mitigation. Often design flood values are needed at locations with insufficient data. Additionally, in hydrologic applications where flood retention is important (e.g., floodplain management and reservoir design), design flood values are required at different flood durations. A statistical approach to this problem is the development of a regression model for extremes in which some of the parameters depend on flood duration in addition to being covariate-dependent. In hydrology, this is called a regional flood-duration-frequency (regional QDF) model. Typically, the underlying statistical distribution is chosen to be the Generalized Extreme Value (GEV) distribution. However, as the support of the GEV distribution depends on both its parameters and the range of the data, special care must be taken in the development of the regional model. In particular, we find that the GEV is problematic when developing a GAMLSS-type analysis due to the difficulty of proposing a link function that is independent of the unknown parameters and the observed data. We discuss these challenges in the context of developing a regional QDF model for Norway.
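
An at-site version of the frequency analysis is straightforward with SciPy; a regional QDF model would instead let the GEV parameters vary with duration and catchment covariates. The data and parameter values below are synthetic, purely for illustration.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# Hypothetical annual maximum floods (m^3/s) for one duration at one site.
annual_maxima = genextreme.rvs(c=-0.1, loc=300, scale=80, size=60,
                               random_state=rng)

# At-site maximum likelihood fit of the GEV distribution.
c, loc, scale = genextreme.fit(annual_maxima)
for T in (10, 50, 100, 200):
    # The T-year design flood is the (1 - 1/T) quantile.
    design_flood = genextreme.ppf(1 - 1 / T, c, loc, scale)
    print(f"{T}-year design flood: {design_flood:.0f} m^3/s")
```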

Keywords: design flood values, Bayesian statistics, regression modeling of extremes, extreme value analysis, GEV

Procedia PDF Downloads 72
165 A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates

Authors: Serge B. Provost

Abstract:

Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it will be established that, given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first n associated moments contain precisely the same amount of information. In practice, however, it is efficient to make use of a limited number of initial moments, as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments will be discussed. The first expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second assumes that the derivative of the logarithm of a density function can be represented as a rational function; this gives rise to a system of linear equations involving sample moments, and the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to modeling ‘big data’, as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed-form expressions that are amenable to algebraic manipulations. They also turn out to be more accurate, as will be shown in several illustrative examples.
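
The stated theorem can be demonstrated directly: the first n raw moments give the power sums, Newton's identities convert those to elementary symmetric polynomials, and the sample points are the roots of the resulting monic polynomial. A small NumPy sketch (for modest n, where the root-finding stays numerically stable):

```python
import numpy as np

def sample_from_moments(raw_moments):
    """Recover the n sample points from their first n raw moments,
    illustrating that both carry the same amount of information."""
    n = len(raw_moments)
    p = [n * m for m in raw_moments]           # power sums p_k = n * m_k
    e = [1.0]                                   # elementary symmetric polynomials
    for k in range(1, n + 1):                   # Newton's identities
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    # prod_j (x - x_j) has coefficient (-1)^k e_k on x^(n-k).
    coeffs = [(-1) ** k * e[k] for k in range(n + 1)]
    return np.sort(np.roots(coeffs).real)

x = np.array([1.5, 2.0, 4.0, 7.5])
moments = [np.mean(x ** k) for k in range(1, len(x) + 1)]
print(sample_from_moments(moments))  # [1.5 2.  4.  7.5]
```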

Keywords: density estimation, log-density, polynomial adjustments, sample moments

Procedia PDF Downloads 165
164 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes

Authors: J. J. Vargas, N. Prieto, L. A. Toro

Abstract:

Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to assess the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. Specifically, a Bayesian estimation was performed to calculate the posterior probability distribution of the parameters, namely the means and the variance-covariance matrix. This technique allows the data set to be analysed without the hypothetically large sample implied in the problem, and it can be treated as an approximation to the finite sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by a stochastic DEA model over several cycles to establish the distribution of the efficiency measures for each DMU (decision making unit). A control limit was calculated with the obtained model; if a DMU presents a low level of efficiency, the system efficiency is out of control. The efficiency calculation reached a global optimum, which ensures model reliability.
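
For reference, the deterministic DEA building block that a stochastic model would repeat over posterior draws can be posed as a linear program. The sketch below solves the input-oriented CCR envelopment model with SciPy; the DMU data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.

    X: (n_inputs, n_dmus), Y: (n_outputs, n_dmus). Returns theta in (0, 1].
    A stochastic DEA variant would re-run this over sampled X, Y."""
    n_in, n_dmu = X.shape
    n_out = Y.shape[0]
    c = np.r_[1.0, np.zeros(n_dmu)]                # minimize theta
    A_in = np.hstack([-X[:, [j0]], X])             # X @ lam <= theta * x0
    A_out = np.hstack([np.zeros((n_out, 1)), -Y])  # Y @ lam >= y0
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(n_in), -Y[:, j0]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n_dmu)
    return res.x[0]

X = np.array([[4.0, 7.0, 8.0, 4.0], [3.0, 3.0, 1.0, 2.0]])  # two inputs
Y = np.array([[1.0, 1.0, 1.0, 1.0]])                        # one output
for j in range(4):
    print(f"DMU {j}: efficiency = {dea_ccr_efficiency(X, Y, j):.3f}")
```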

Keywords: data envelopment analysis, DEA, multivariate control chart, rejection simulation method

Procedia PDF Downloads 374
163 Intelligent System and Renewable Energy: A Farming Platform in Precision Agriculture

Authors: Ryan B. Escorial, Elmer A. Maravillas, Chris Jordan G. Aliac

Abstract:

This study presents a small-scale water pumping system utilizing a fuzzy logic inference system attached to a renewable energy source. The fuzzy logic controller was designed and simulated in the MATLAB fuzzy logic toolbox to examine the properties and characteristics of the input and output variables. The result of the simulation was implemented on a microcontroller, together with sensors, modules, and photovoltaic cells. The study used the Grand Rapids variety of lettuce, organic substrates, and foliar fertilizer to observe the capability of the device to irrigate crops. Two plant boxes, intended for manual and automated irrigation respectively, were prepared, each with 48 heads of lettuce. The observation of the system took 22-31 days, one harvest period of the crop. Results showed a 22.55% increase in agricultural productivity compared to manual irrigation. Aside from reducing human effort and time, the smart irrigation system could help lessen some of the shortcomings of manual irrigation. It could facilitate the economical utilization of water, reducing consumption by 25%. The use of renewable energy could also help farmers reduce the cost of production by minimizing the use of diesel and gasoline.
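
A minimal Mamdani-style fuzzy controller illustrates the inference step; the membership functions, rule base, and pump-time scale below are invented for illustration and do not reproduce the deployed controller.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on breakpoints a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def pump_time(moisture):
    """Toy Mamdani controller: soil moisture (%) -> pump time (min)."""
    dry = tri(moisture, -1, 0, 40)
    moist = tri(moisture, 20, 50, 80)
    wet = tri(moisture, 60, 100, 101)
    out = np.linspace(0, 10, 101)              # candidate pump times
    # Rules: dry -> long, moist -> short, wet -> off (min-inference, max-aggregation).
    agg = np.maximum.reduce([
        np.minimum(dry, tri(out, 5, 8, 10)),
        np.minimum(moist, tri(out, 1, 3, 5)),
        np.minimum(wet, tri(out, -1, 0, 1)),
    ])
    return (agg * out).sum() / agg.sum()       # centroid defuzzification

for m in (10, 45, 90):
    print(f"moisture {m}% -> pump {pump_time(m):.1f} min")
```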

Keywords: fuzzy logic, intelligent system, precision agriculture, renewable energy

Procedia PDF Downloads 129
162 Progressive Type-I Interval Censoring with Binomial Removal: Estimation and Its Properties

Authors: Sonal Budhiraja, Biswabrata Pradhan

Abstract:

This work considers statistical inference based on progressive Type-I interval censored data with random removal. The scheme of progressive Type-I interval censoring with random removal can be described as follows. Suppose n identical items are placed on test at time T0 = 0 with k pre-specified inspection times T1 < T2 < . . . < Tk, where Tk is the scheduled termination time of the experiment. At inspection time Ti, Ri of the Si remaining surviving units are randomly removed from the experiment. The removal follows a binomial distribution with parameters Si and pi for i = 1, . . . , k, with pk = 1. In this censoring scheme, the number of failures in the different inspection intervals and the number of randomly removed items at the pre-specified inspection times are observed. Asymptotic properties of the maximum likelihood estimators (MLEs) are established under some regularity conditions. A β-content γ-level tolerance interval (TI) is determined for the two-parameter Weibull lifetime model using the asymptotic properties of the MLEs, and the minimum sample size required to achieve the desired β-content γ-level TI is determined. The performance of the MLEs and the TI is studied via simulation.
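
The censoring scheme reads directly as a simulation. The sketch below generates one realization under assumed Weibull lifetimes, an assumed inspection schedule, and assumed removal probabilities:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_scheme(n=100, shape=1.5, scale=10.0,
                    inspections=(2.0, 5.0, 8.0, 12.0),
                    probs=(0.2, 0.2, 0.2, 1.0)):
    """One realization of progressive Type-I interval censoring with binomial
    removal. The Weibull parameters, inspection times Ti, and removal
    probabilities pi are illustrative assumptions."""
    lifetimes = scale * rng.weibull(shape, n)
    alive = np.ones(n, dtype=bool)
    for t, p in zip(inspections, probs):
        failed = alive & (lifetimes <= t)        # failures since last inspection
        alive &= ~failed
        s = int(alive.sum())                     # surviving units Si at time Ti
        r = s if p == 1.0 else rng.binomial(s, p)
        drop = rng.choice(np.flatnonzero(alive), size=r, replace=False)
        alive[drop] = False                      # random removal of Ri units
        print(f"T={t:>5}: failures={failed.sum():>3}, removed={r:>3}")

simulate_scheme()
```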

Keywords: asymptotic normality, consistency, regularity conditions, simulation study, tolerance interval

Procedia PDF Downloads 250
161 Inference for Compound Truncated Poisson Lognormal Model with Application to Maximum Precipitation Data

Authors: M. Z. Raqab, Debasis Kundu, M. A. Meraou

Abstract:

In this paper, we analyze maximum precipitation data recorded during a particular period of time at different stations in the Global Historical Climatology Network of the USA. One important point to mention is that some stations are shut down on certain days for one reason or another, and the maximum values are recorded by excluding those readings. It is assumed that the number of stations in operation follows a zero-truncated Poisson distribution and that the daily precipitation follows a lognormal distribution; we call this model the compound truncated Poisson lognormal model. The proposed model has three unknown parameters and can take a variety of shapes. The maximum likelihood estimators can be obtained quite conveniently using the Expectation-Maximization (EM) algorithm, and approximate maximum likelihood estimators are also derived. The associated confidence intervals can be obtained from the observed Fisher information matrix. Simulations have been performed to check the performance of the EM algorithm, and it is observed that the EM algorithm works quite well in this case. When we analyze the precipitation data set using the proposed model, we observe that it provides a better fit than some of the existing models.
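
The generative model is easy to simulate, which also shows the latent structure the EM algorithm exploits: the unobserved station count is a Poisson variable conditioned on being positive. A sketch with illustrative parameter values, not fitted ones:

```python
import numpy as np

rng = np.random.default_rng(5)

def rvs_compound_trunc_poisson_lognormal(lam, mu, sigma, size):
    """Draw daily maxima: the max of N lognormal readings, where N follows a
    zero-truncated Poisson(lam). Parameter values below are illustrative."""
    # Zero-truncated Poisson via rejection of zeros.
    n = rng.poisson(lam, size)
    while (n == 0).any():
        zeros = n == 0
        n[zeros] = rng.poisson(lam, zeros.sum())
    return np.array([rng.lognormal(mu, sigma, k).max() for k in n])

maxima = rvs_compound_trunc_poisson_lognormal(lam=5.0, mu=1.0, sigma=0.5,
                                              size=1000)
print(maxima.mean(), maxima.std())
```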

Keywords: compound Poisson lognormal distribution, EM algorithm, maximum likelihood estimation, approximate maximum likelihood estimation, Fisher information, skew distribution

Procedia PDF Downloads 108
160 Subsurface Elastic Properties Determination for Site Characterization Using Seismic Refraction Tomography at the Pwalugu Dam Area

Authors: Van-Dycke Sarpong Asare, Vincent Adongo

Abstract:

Field measurement of subsurface seismic P-wave velocities was undertaken through seismic refraction tomography. The aim of this work is to obtain a model of the elastic properties of the shallow subsurface material relevant to geotechnical site characterization. The survey area is at Pwalugu in Northern Ghana, where a multipurpose dam for electricity generation, irrigation, and potable water delivery is being planned. A 24-channel seismograph and 24 10-Hz electromagnetic geophones, deployed 5 m apart, constituted the acquisition hardware. Eleven 2-D seismic refraction profiles were acquired, nine running almost perpendicular to the White Volta at Pwalugu and two parallel to it. The refraction tomograms of the eleven profiles revealed a subsurface model consisting of one minor and one major acoustic impedance boundary: the contact between the top dry/loose sand and the variably weathered sandstone, and the contact between the overburden and the sandstone bedrock, respectively. The P-wave velocities, and by inference, with a priori values of Poisson's ratios, the S-wave velocities, assisted in characterizing the geotechnical conditions of the proposed site and in evaluating dynamic properties such as the maximum shear modulus, the bulk modulus, and the Young's modulus.
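
Given a tomogram velocity, an assumed Poisson's ratio, and an assumed density, the dynamic moduli follow from standard elasticity relations (G = ρVs², E = 2G(1+ν), K = E/(3(1−2ν))). A small sketch with illustrative values:

```python
import numpy as np

def dynamic_moduli(vp, nu=0.3, rho=2000.0):
    """Dynamic elastic moduli from a refraction-derived P-wave velocity.
    Poisson's ratio and density are a priori values (illustrative here).
    vp in m/s, rho in kg/m^3; moduli are returned in GPa."""
    vs = vp / np.sqrt(2 * (1 - nu) / (1 - 2 * nu))  # inferred S-wave velocity
    G = rho * vs ** 2                               # maximum shear modulus
    E = 2 * G * (1 + nu)                            # Young's modulus
    K = E / (3 * (1 - 2 * nu))                      # bulk modulus
    return vs, G / 1e9, E / 1e9, K / 1e9

for vp in (600.0, 1800.0, 3200.0):                  # e.g. loose sand -> sandstone
    vs, G, E, K = dynamic_moduli(vp)
    print(f"Vp={vp:.0f} m/s: Vs={vs:.0f} m/s, "
          f"G={G:.2f} GPa, E={E:.2f} GPa, K={K:.2f} GPa")
```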

Keywords: tomography, characterization, consolidated, Pwalugu, seismograph

Procedia PDF Downloads 129
159 Estimation of Transition and Emission Probabilities

Authors: Aakansha Gupta, Neha Vadnere, Tapasvi Soni, M. Anbarsi

Abstract:

Protein secondary structure prediction is one of the most important goals pursued by bioinformatics and theoretical chemistry; it is highly important in medicine and biotechnology. Some aspects of protein function and genome analysis can be predicted by secondary structure prediction, which is used to help annotate sequences, classify proteins, identify domains, and recognize functional motifs. In this paper, we represent protein secondary structure as a mathematical model. To extract and predict the protein secondary structure from the primary structure, we require a set of parameters: any constants appearing in the model are specified by these parameters, which also provide a mechanism for efficient and accurate use of data. Many algorithms exist for estimating these model parameters, of which the most popular is the Expectation-Maximization (EM) algorithm. The model parameters are estimated from protein datasets such as RS126 using a Bayesian probabilistic method (the data set being categorical). This work can be extended to compare the efficiency of the EM algorithm with that of other algorithms for estimating the model parameters, which will in turn lead to an efficient component for protein secondary structure prediction. Further, it provides scope for using these parameters to predict the secondary structure of proteins with machine learning techniques such as neural networks and fuzzy logic. The ultimate objective is to obtain accuracy greater than previously achieved.
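
When the states are observed, as in a labeled training set, the transition and emission probabilities can be estimated by smoothed counting, the simple Bayesian estimate for categorical data that the abstract alludes to; EM is needed only when the states are hidden. A toy sketch with random labels standing in for real secondary-structure data:

```python
import numpy as np

def estimate_hmm_params(states, symbols, n_states=3, n_symbols=20, alpha=1.0):
    """Estimate transition and emission probabilities from labeled sequences
    using Dirichlet (Laplace, alpha=1) smoothing. Toy alphabet sizes: three
    states (e.g. helix/strand/coil) and twenty symbols (amino acids)."""
    A = np.full((n_states, n_states), alpha)   # transition pseudo-counts
    B = np.full((n_states, n_symbols), alpha)  # emission pseudo-counts
    for s, t in zip(states[:-1], states[1:]):
        A[s, t] += 1                           # count state transitions
    for s, o in zip(states, symbols):
        B[s, o] += 1                           # count emissions
    return A / A.sum(1, keepdims=True), B / B.sum(1, keepdims=True)

rng = np.random.default_rng(6)
states = rng.integers(0, 3, 500)     # placeholder structure labels
symbols = rng.integers(0, 20, 500)   # placeholder amino acid indices
A, B = estimate_hmm_params(states, symbols)
print(A.round(2))
```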

Keywords: model parameters, expectation maximization algorithm, protein secondary structure prediction, bioinformatics

Procedia PDF Downloads 481
158 Effects of Applying Low-Dye Taping in Performing Double-Leg Squat on Electromyographic Activity of Lower Extremity Muscles for Collegiate Basketball Players with Excessive Foot Pronation

Authors: I. M. K. Ho, S. K. Y. Chan, K. H. P. Lam, G. M. W. Tong, N. C. Y. Yeung, J. T. C. Luk

Abstract:

Low-Dye taping (LDT) is commonly used for treating foot problems, such as plantar fasciitis, and for supporting the foot arch in runners and non-athlete patients with pes planus. A potential negative impact of pronated feet, leading to tibial and femoral internal rotation via the entire kinetic chain reaction, has been postulated and identified. The changed lower limb biomechanics, potentially leading to poor activation of hip and knee stabilizers such as the gluteus maximus and medius, may be associated with a higher risk of knee injuries, including patellofemoral pain syndrome and ligamentous sprain, in many team sports players. It is therefore speculated that foot arch correction with LDT might enhance the use of the gluteal muscles. The purpose of this study was to investigate the effect of applying LDT on the surface electromyographic (sEMG) activity of the superior gluteus maximus (SGMax), inferior gluteus maximus (IGMax), gluteus medius (GMed), and tibialis anterior (TA) during a double-leg squat. Twelve male collegiate basketball players (age: 21.7±2.5 years; body fat: 12.4±3.6%; navicular drop: 13.7±2.7 mm) with at least three years of regular basketball training experience participated in this study. Participants were excluded if they had a recent history of lower limb injuries, over 16.6% body fat, or less than a 10 mm drop in the navicular drop (ND) test. Recruited subjects visited the laboratory once for the within-subject crossover study. Maximum voluntary isometric contraction (MVIC) tests on all selected muscles were performed in randomized order, followed by sEMG tests on the double-leg squat under LDT and non-LDT conditions in counterbalanced order. SGMax, IGMax, GMed, and TA activities during the entire 2-second concentric and 2-second eccentric phases were normalized and interpreted as %MVIC. The magnitude of the difference between taped and non-taped conditions for each muscle was further assessed via standardized effect ± 90% confidence intervals (CI) with non-clinical magnitude-based inference. A paired samples t-test showed a significant decrease (4.7±1.4 mm) in ND (95% CI: 3.8, 5.6; p < 0.05), while no significant difference was observed between taped and non-taped conditions in the sEMG tests for all muscles and contractions (p > 0.05). Beyond traditional significance testing, magnitude-based inference showed a possible increase in IGMax activity (small standardized effect: 0.27±0.44), a likely increase in GMed activity (small standardized effect: 0.34±0.34), and a possible increase in TA activity (small standardized effect: 0.22±0.29) during the eccentric phase. It is speculated that the decrease of navicular drop supported by LDT application could potentially enhance the use of the inferior gluteus maximus and gluteus medius, especially during the eccentric phase. As the eccentric phase of the double-leg squat is an important component of landing activities in basketball, further studies on the onset and amount of gluteal activation during jumping and landing activities with LDT are recommended. Since hip and knee kinematics were not measured in this study, the underlying cause of the observed increase in gluteal activation during the squat after LDT is inconclusive. In this regard, future work should investigate the relationships between LDT application, ND, hip and knee kinematics, and gluteal muscle activity during sport-specific jumping and landing tasks.

Keywords: flat foot, gluteus maximus, gluteus medius, injury prevention

Procedia PDF Downloads 156
157 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases

Authors: Daniel C. Bonzo

Abstract:

Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think of extrapolation of data as extending information and conclusions from one estimand to another. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly being accepted by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under an extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equal weights or weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
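
The two weighting schemes can be sketched in a few lines; the episode values below are invented, and a full analysis would embed the weights in a mixed model rather than a simple weighted mean.

```python
import numpy as np

def cluster_mean(cluster_data, weights="equal"):
    """Point estimate across clusters of varying size (e.g., subjects with
    different numbers of bleeding episodes). Illustrative sketch only."""
    means = np.array([np.mean(c) for c in cluster_data])
    sizes = np.array([len(c) for c in cluster_data], dtype=float)
    w = np.ones_like(sizes) if weights == "equal" else sizes.copy()
    w /= w.sum()
    return float(w @ means)

subjects = [[2.1, 1.8], [3.0], [1.2, 1.5, 1.1, 0.9], [2.4, 2.0, 2.2]]
print("equal weights:", cluster_mean(subjects, "equal"))
print("size-proportional:", cluster_mean(subjects, "size"))
```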

Keywords: clustered data, estimand, extrapolation, mixed model

Procedia PDF Downloads 136
156 Reliability-Based Condition Assessment of Offshore Wind Turbines Using SHM Data

Authors: Caglayan Hizal, Hasan Emre Demirci, Engin Aktas, Alper Sezer

Abstract:

Offshore wind turbines consist of a long slender tower with a heavy fixed mass at the top (the nacelle) together with a heavy rotating mass (the blades and hub). They are continually subjected to environmental loads, including wind and wave loads, over their service life. This study presents a three-stage methodology for reliability-based condition assessment of offshore wind turbines against seismic, wave, and wind induced effects, considering soil-structure interaction. In this context, the failure criteria are the serviceability limits of a monopile supporting an offshore wind turbine: (a) the allowable horizontal displacement at the pile head should not exceed 0.2 m, and (b) rotations at the pile head should not exceed 0.5°. A Bayesian system identification framework is adapted to the classical reliability analysis procedure; using this framework, a reliability assessment can be applied directly to the updated finite element model without resorting to time-consuming methods. For numerical verification, simulation data from the finite element model of a real offshore wind turbine structure are investigated using the three-stage methodology.
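
Once the updated model yields response distributions, the reliability against the two serviceability limits can be checked by Monte Carlo sampling. In the sketch below, lognormal distributions with invented parameters stand in for outputs of the Bayesian-updated finite element model:

```python
import numpy as np

rng = np.random.default_rng(7)

def serviceability_reliability(n=100_000):
    """Crude Monte Carlo check of the two serviceability limit states.
    The lognormal response distributions are illustrative placeholders for
    the updated model's outputs; all parameters are assumptions."""
    head_disp = rng.lognormal(mean=np.log(0.08), sigma=0.45, size=n)  # m
    head_rot = rng.lognormal(mean=np.log(0.15), sigma=0.50, size=n)   # degrees
    failed = (head_disp > 0.2) | (head_rot > 0.5)   # either limit exceeded
    pf = failed.mean()
    return pf, 1.0 - pf

pf, reliability = serviceability_reliability()
print(f"P(failure) = {pf:.4f}, reliability = {reliability:.4f}")
```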

Keywords: offshore wind turbines, SHM, reliability assessment, soil-structure interaction

Procedia PDF Downloads 532
155 FLIME - Fast Low Light Image Enhancement for Real-Time Video

Authors: Vinay P., Srinivas K. S.

Abstract:

Low light image enhancement is of utmost importance in computer vision based tasks. Applications include vision systems for autonomous driving, night vision devices for defence systems, and low light object detection tasks. Many of the existing deep learning methods are resource intensive during the inference step and take considerable time for processing. To process a real-time video feed at 24 frames per second, the algorithm should take considerably less than 41 milliseconds per frame, and even less for video at 30 or 60 frames per second. The paper presents a fast and efficient solution with two main advantages: it has the potential to be used on a real-time video feed, and it can be used in low compute environments because of its lightweight nature. The proposed solution is a pipeline of three steps: the first is the use of a simple function to map input RGB values to output RGB values, the second is to balance the colors, and the final step is to adjust the contrast of the image. A custom dataset was carefully prepared using images taken in low and bright lighting conditions. The preparation of the dataset, the proposed model, and the processing time are discussed in detail, and the quality of the enhanced images using different methods is shown.
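
A hedged sketch of such a three-step pipeline, with a fixed gamma curve standing in for the learned RGB mapping, plus gray-world balancing and a percentile contrast stretch (all illustrative choices, not the paper's model):

```python
import numpy as np

def flime_like_enhance(img, gamma=0.6):
    """Three-step enhancement in the spirit of the pipeline above:
    (1) per-pixel RGB mapping, (2) gray-world color balance,
    (3) linear contrast stretch. The gamma map and constants are
    illustrative stand-ins for the learned mapping function."""
    x = img.astype(np.float32) / 255.0
    x = x ** gamma                                   # step 1: brighten dark pixels
    x *= x.mean() / (x.mean(axis=(0, 1)) + 1e-6)     # step 2: balance channels
    lo, hi = np.percentile(x, (1, 99))
    x = np.clip((x - lo) / (hi - lo + 1e-6), 0, 1)   # step 3: stretch contrast
    return (x * 255).astype(np.uint8)

frame = (np.random.default_rng(8).random((120, 160, 3)) * 60).astype(np.uint8)
print(flime_like_enhance(frame).mean())  # brighter than the dark input
```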

Keywords: low light image enhancement, real-time video, computer vision, machine learning

Procedia PDF Downloads 206
154 e-Learning Security: A Distributed Incident Response Generator

Authors: Bel G Raggad

Abstract:

An e-Learning setting is a distributed computing environment in which information resources can be connected to any public network. Public networks are very insecure, which can compromise the reliability of an e-Learning environment. This study is concerned only with the intrusion detection aspect of e-Learning security and how incident responses are planned. The literature reports great advances in intrusion detection systems (IDS) but has neglected an important IDS weakness: suspected events are detected, but an intrusion is not determined because it is not defined in the IDS databases. We propose a distributed incident response generator (DIRG) that produces incident responses when the working IDS suspects an event that does not correspond to a known intrusion. Data involved in intrusion detection when ample uncertainty is present are often not suitable for formal statistical models, including Bayesian ones. We instead adopt Dempster-Shafer theory to process intrusion data for the unknown event. The DIRG engine transforms data into a belief structure using incident scenarios deduced by the security administrator. Belief values associated with various incident scenarios are then derived and evaluated to choose the most appropriate scenario, for which an automatic incident response is generated. This article provides a numerical example demonstrating the working of the DIRG system.
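
The core Dempster-Shafer operation, combining two sources' mass functions while renormalizing away conflict, is compact. The incident scenarios and mass values below are invented for illustration, not taken from the paper's numerical example.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                  # mass assigned to contradiction
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical scenario beliefs from two evidence sources about a
# suspected event: port scan (P), worm (W), or benign (B).
frame = frozenset("PWB")
m1 = {frozenset("P"): 0.6, frozenset("PW"): 0.3, frame: 0.1}
m2 = {frozenset("W"): 0.5, frozenset("PW"): 0.4, frame: 0.1}
print(dempster_combine(m1, m2))
```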

Keywords: decision support system, distributed computing, e-Learning security, incident response, intrusion detection, security risk, stateful inspection

Procedia PDF Downloads 437
153 Parameter Estimation for Contact Tracing in Graph-Based Models

Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar

Abstract:

We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited for realistic situations (a small contact tracing probability, or a small probability of detecting index cases). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular fit the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.

Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference

Procedia PDF Downloads 78
152 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory

Authors: Chiung-Hui Chen

Abstract:

The Internet of Things (IoT) was designed for widespread convenience. With smart tags and the sensing network, a large quantity of dynamic information is immediately available in the IoT, and through internal communication and interaction, meaningful objects provide real-time services to users. Therefore, service with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors and adopted the probability module of the hierarchical Hidden Markov Model for inference. The statistical analysis was conducted to achieve the following objectives. First, define user behaviors and predict user behavior routes with the environment model in order to analyze user purposes. Second, construct the hierarchical Hidden Markov Model according to the logic framework and establish the sequential intensity among behaviors to characterize the use and activity fabric of the intelligent environment. Third, establish the intensity of the relation between objects and the probability of their being used; this indicator can describe possible limitations of the mechanism. As the process is recorded in the system created in this study, these data can be reused to adjust the procedure of intelligent design services.

Keywords: behavior, big data, hierarchical hidden Markov model, intelligent object

Procedia PDF Downloads 233
151 Maximum-Likelihood Inference of Multi-Finger Movements Using Neural Activities

Authors: Kyung-Jin You, Kiwon Rhee, Marc H. Schieber, Nitish V. Thakor, Hyun-Chool Shin

Abstract:

It remains unknown whether M1 neurons encode multi-finger movements independently or as a neural combination of single finger movements, although multi-finger movements are physically a combination of single finger movements. We present evidence of correlation between single and multi-finger movements and also attempt the challenging task of semi-blind decoding of neural data with minimal training of the neural decoder. Data were collected from 115 task-related neurons in M1 of a trained rhesus monkey performing flexion and extension of each finger and the wrist (12 single and 6 two-finger movements). By exploiting the correlation of temporal firing patterns between movements, we found that the correlation coefficient for physically related movement pairs is greater than for others; neurons tuned to single finger movements increased their firing rate when multi-finger commands were instructed. Based on this knowledge, neural semi-blind decoding is done by choosing the greatest and second greatest likelihoods over the canonical candidates. We achieved a decoding accuracy of about 60% for multi-finger movements without a corresponding training data set. These results suggest that neural activity recorded during single finger movements alone can be exploited to control dexterous multi-fingered neuroprosthetics.
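
A hedged sketch of the decoding idea: score each candidate movement by the Poisson log-likelihood of the observed spike counts, approximating a multi-finger rate profile as the sum of its single-finger profiles. That additivity assumption is motivated by the observed rate increases, not by the paper's exact decoder, and all rate values are invented.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(10)
n_neurons = 115
movements = ["index-flex", "thumb-flex", "index+thumb-flex"]

# Hypothetical mean firing rates learned from single-finger trials only;
# the combined movement is approximated by summing the single-finger rates.
rates = {m: rng.uniform(1, 20, n_neurons) for m in movements[:2]}
rates["index+thumb-flex"] = rates["index-flex"] + rates["thumb-flex"]

def decode(spike_counts):
    """Pick the movement maximizing the Poisson log-likelihood of the
    observed spike counts -- a sketch of maximum-likelihood decoding."""
    ll = {m: poisson.logpmf(spike_counts, r).sum() for m, r in rates.items()}
    return max(ll, key=ll.get)

truth = "index+thumb-flex"
counts = rng.poisson(rates[truth])   # simulated trial of the combined movement
print(decode(counts))                # ideally recovers 'index+thumb-flex'
```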

Keywords: finger movement, neural activity, blind decoding, M1

Procedia PDF Downloads 321
150 The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

Authors: O. Abiodun Adeyinka, B. Adeyemo Adesesan

Abstract:

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute a Likelihood Ratio (LR) quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given data set will be made. It is therefore important to know how repeatable and reproducible our estimated LR is. This paper evaluated the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to get an incorrect estimate that would be used to deliver a wrong judgment in a court of law. The estimation of the LR is fundamentally a Bayesian concept, and we used two LR estimators, namely Logistic Regression (LoR) and the Kernel Density Estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, which are based on a handwriting dataset, show that the LR has different confidence intervals, which implies that the LR cannot be estimated with the same certainty everywhere. Though LoR performed better than KDE when tested on the same dataset, the two LR estimators investigated showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
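
Both estimators can be sketched on synthetic similarity scores: the KDE route estimates the two score densities directly, while the logistic-regression route converts a posterior probability into an LR (equal priors assumed via balanced classes). All scores below are invented, not from the handwriting dataset.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
# Hypothetical similarity scores between handwriting pairs.
same = rng.normal(0.75, 0.10, 300)      # same-writer pairs (prosecution Hp)
diff = rng.normal(0.45, 0.12, 300)      # different-writer pairs (defense Hd)

# KDE estimator: LR = f(score | Hp) / f(score | Hd).
kde_p, kde_d = gaussian_kde(same), gaussian_kde(diff)
score = 0.68
lr_kde = kde_p(score)[0] / kde_d(score)[0]

# Logistic regression estimator: with balanced classes, the posterior
# odds equal the likelihood ratio.
X = np.r_[same, diff].reshape(-1, 1)
y = np.r_[np.ones_like(same), np.zeros_like(diff)]
p = LogisticRegression().fit(X, y).predict_proba([[score]])[0, 1]
lr_lor = p / (1 - p)
print(f"LR (KDE) = {lr_kde:.2f}, LR (LoR) = {lr_lor:.2f}")
```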

Keywords: confidence interval, handwriting, kernel density estimator, KDE, logistic regression, LoR, repeatability, reproducibility

Procedia PDF Downloads 124
149 Performance Comparison of Outlier Detection Techniques Based Classification in Wireless Sensor Networks

Authors: Ayadi Aya, Ghorbel Oussama, M. Obeid Abdulfattah, Abid Mohamed

Abstract:

Nowadays, many wireless sensor networks have been deployed in the real world to collect valuable raw sensed data, and the challenge is to extract high-level knowledge from this huge amount of data. The identification of outliers can lead to the discovery of useful and meaningful knowledge. In the field of wireless sensor networks, an outlier is defined as a measurement that deviates from the normal behavior of the sensed data. Many outlier detection techniques for WSNs have been extensively studied in the past decade, focusing on classification-based algorithms that identify outliers in real transaction datasets. This survey aims at providing a structured and comprehensive overview of the existing research on classification-based outlier detection techniques as applicable to WSNs. We have identified the key hypotheses used by these approaches to differentiate between normal and outlier behavior. In addition, this paper tries to provide an easier and more succinct understanding of the classification-based techniques. Furthermore, we identify the advantages and disadvantages of the different classification-based techniques, present a comparative guide with useful paradigms for promoting outlier detection research in various WSN applications, and suggest further opportunities for future research.

Keywords: Bayesian networks, classification-based approaches, KPCA, neural networks, one-class SVM, outlier detection, wireless sensor networks

Procedia PDF Downloads 496
148 A High Efficiency Reduced Rules Neuro-Fuzzy Based Maximum Power Point Tracking Controller for Photovoltaic Array Connected to Grid

Authors: Lotfi Farah, Nadir Farah, Zaiem Kamar

Abstract:

This paper presents a maximum power point tracking (MPPT) controller using a high-efficiency reduced-rules neuro-fuzzy inference system (HE2RNF) for a 100 kW stand-alone photovoltaic (PV) system connected to the grid. The suggested HE2RNF-based MPPT seeks the optimal duty cycle for the boost DC-DC converter, making the designed PV system work at the maximum power point (MPP) and then transferring this power to the grid via a three-level voltage source converter (VSC). The PV current variation and voltage variation are chosen as the inputs of the HE2RNF-based MPPT controller. Using these inputs with the duty cycle as the single output, a six-rule ANFIS is generated. The high performance of the proposed HE2RNF is shown numerically in the MATLAB/Simulink environment. The 0.006% steady-state error, 0.006 s tracking time, and 0.088 s starting time prove the robustness of these six reduced rules compared with the widely used twenty-five.

Keywords: PV, MPPT, ANFIS, HE2RNF-based MPPT controller, VSC, grid connection

Procedia PDF Downloads 183