Search results for: weighted overlay method
18877 Permanent Magnet Machine Can Be a Vibration Sensor for Itself
Authors: M. Barański
Abstract:
The article presents a new vibration diagnostic method designed for machines with permanent magnets (PM). Such devices are commonly used in small wind and water power systems and in vehicle drives. The author's method is innovative and unique: it exploits a specific structural property of PM machines, the electromotive force (EMF) generated due to vibrations. A number of publications describing vibration diagnostic methods and tests of electrical PM machines were analysed, and no method was found that determines the technical condition of such a machine based on its own signals. This article discusses the genesis of the method, the similarity of a machine with permanent magnets to a vibration sensor, and the results of simulation and laboratory tests. The method of determining the technical condition of an electrical machine with permanent magnets based on its own signals is the subject of patent application No. P.405669 and is the main thesis of the author's doctoral dissertation.
Keywords: vibrations, generator, permanent magnet, traction drive, electrical vehicle
Procedia PDF Downloads 367

18876 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification has become a key research domain in the defense industry, since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles (HRRPs), one-dimensional radar images in which the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. To address this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition (SVD) to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, the test set, with the profiles included in a pre-loaded database, the training set. The classification is improved by using SVD, since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, thereby reducing unwanted information such as noise. SVD permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded; this way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two SVD-based metrics, F1 and F2, are used in the identification process. In F2 the angle is weighted, since the leading singular vectors set the importance of the contributions to the formation of a target signal, whereas F1 simply uses the unweighted angle. In order to have a wide database of radar signatures and to evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft along defined trajectories taken from an actual measurement. Given the nature of the datasets, the main drawback of using simulated profiles instead of measured ones is that the former imply an ideal identification scenario: measured profiles suffer from noise, clutter and other unwanted information, while simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, noise is added before the creation of the test set. The identification results obtained with the unweighted and weighted metrics are analysed to determine which algorithm provides the best robustness against noise in a realistic scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance is improved when weighting is applied. Future experiments with larger sets are expected to be conducted, with the aim of finally using actual profiles as test sets in a real hostile situation.
Keywords: HRRP, NCTI, simulated/synthetic database, SVD
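A minimal sketch of the subspace-angle idea described in this abstract, assuming each training target is stored as a matrix whose columns are range profiles, the signal subspace is spanned by the leading left singular vectors, and a test profile is assigned to the target whose subspace makes the smallest angle with it. The array names and the number of retained singular vectors are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def signal_subspace(profiles, rank):
    """Leading left singular vectors of a (n_range_bins x n_profiles)
    matrix of training range profiles: the 'signal' subspace."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :rank]

def subspace_angle(test_profile, basis):
    """Angle between a test profile and the span of `basis` (orthonormal columns)."""
    p = test_profile / np.linalg.norm(test_profile)
    proj = basis @ (basis.T @ p)                  # projection onto the signal subspace
    cosang = np.clip(np.linalg.norm(proj), 0.0, 1.0)
    return np.arccos(cosang)

def classify(test_profile, training_sets, rank=10):
    """Return the target whose signal subspace makes the smallest angle."""
    angles = {name: subspace_angle(test_profile, signal_subspace(M, rank))
              for name, M in training_sets.items()}
    return min(angles, key=angles.get)
```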
Procedia PDF Downloads 354

18875 Application of a Modified Crank-Nicolson Method in Metallurgy
Authors: Kobamelo Mashaba
Abstract:
Molten slag has a high temperature, in the range of 1723-1923 K, and carries a large amount of useful energy that can be exploited to reduce energy consumption and CO₂ emissions through heat recovery. In this study, we therefore investigated the performance of a modified Crank-Nicolson method for a delayed partial differential equation describing the heat recovery of molten slag in the metallurgical mining environment. It was proved that the proposed method converges quickly compared to the classical method and that a unique solution exists. The numerical results indicate that the proposed methodology is viable and profitable for the mining industry.
Keywords: delayed partial differential equation, modified Crank-Nicolson Method, molten slag, heat recovery, parabolic equation
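For orientation, a minimal sketch of the classical Crank-Nicolson scheme for the one-dimensional heat equation u_t = α u_xx, on which the modified method for the delayed PDE builds. The grid sizes, diffusivity, and Dirichlet boundary treatment are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def crank_nicolson_heat(u0, alpha, dx, dt, steps):
    """Advance u_t = alpha * u_xx with fixed (Dirichlet) boundary values
    using the classical Crank-Nicolson scheme."""
    u = np.asarray(u0, dtype=float).copy()
    n = len(u)
    r = alpha * dt / (2.0 * dx**2)
    # Tridiagonal systems A u^{n+1} = B u^{n} on the interior nodes
    main_A = (1 + 2*r) * np.ones(n - 2)
    main_B = (1 - 2*r) * np.ones(n - 2)
    off = r * np.ones(n - 3)
    A = np.diag(main_A) - np.diag(off, 1) - np.diag(off, -1)
    B = np.diag(main_B) + np.diag(off, 1) + np.diag(off, -1)
    for _ in range(steps):
        rhs = B @ u[1:-1]
        rhs[0]  += 2 * r * u[0]      # constant boundary contributions
        rhs[-1] += 2 * r * u[-1]
        u[1:-1] = np.linalg.solve(A, rhs)
    return u
```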
Procedia PDF Downloads 102

18874 Correlates of Cost Effectiveness Analysis of Rating Scale and Psycho-Productive Multiple Choice Test for Assessing Students' Performance in Rice Production in Secondary Schools in Ebonyi State, Nigeria
Authors: Ogbonnaya Elom, Francis N. Azunku, Ogochukwu Onah
Abstract:
This study was carried out to determine the correlates of cost-effectiveness analysis of a rating scale and a psycho-productive multiple choice test for assessing students' performance in rice production. Four research questions were developed and answered, while one hypothesis was formulated and tested. Survey and correlational designs were adopted. The population of the study was 20,783, made up of 20,511 senior secondary (SS II) students and 272 teachers of agricultural science from 221 public secondary schools. Two schools, each with one intact class of 30 students, were purposively selected as the sample based on certain criteria. Four sets of instruments were used for data collection. One of the instruments, the rating scale, was subjected to face and content validation, while the other three were subjected to face validation only. The Cronbach alpha technique was used to determine the internal consistency of the rating scale items, yielding a coefficient of 0.82, while the Kuder-Richardson (K-R 20) formula was used to determine the stability of the psycho-productive multiple choice test items, yielding a coefficient of 0.80. Data collection followed a step-by-step approach. The data collected were analysed using percentages, weighted means and the sign test to answer the research questions, while the hypothesis was tested using Spearman's rank-order correlation and the t-test statistic. The findings revealed, among others, that the psycho-productive multiple choice test is more effective than the rating scale when both are applied to the two groups of students. It was recommended, among others, that external examination bodies should integrate the use of psycho-productive multiple choice tests into their examination policy and direct secondary schools to comply with it.
Keywords: correlates, cost-effectiveness, psycho-productive multiple-choice scale, rating scale
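The reliability coefficients mentioned above can be computed directly from an item-response matrix. A minimal sketch follows (Likert-type items for Cronbach's alpha, dichotomous 0/1 items for K-R 20); the matrix layout and variance conventions are illustrative assumptions rather than the authors' exact computation.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)    # sample variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

def kr20(items):
    """K-R 20 for dichotomous (0/1) items; a special case of alpha."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    p = items.mean(axis=0)                       # proportion correct per item
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)
```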
Procedia PDF Downloads 143

18873 Implicit Off-Grid Block Method for Solving Fourth and Fifth Order Ordinary Differential Equations Directly
Authors: Olusola Ezekiel Abolarin, Gift E. Noah
Abstract:
This research work considered an innovative procedure for numerically approximating higher-order initial value problems (IVPs) of ordinary differential equations (ODEs) using the Legendre polynomial as the basis function. The proposed method is a half-step, self-starting block integrator employed to approximate fourth and fifth order IVPs without reduction to lower order. The method was developed through a collocation and interpolation approach. The basic properties of the method, such as convergence, consistency and stability, were investigated. Several test problems were considered, and the results compared favorably with both exact solutions and other existing methods.
Keywords: initial value problem, ordinary differential equation, implicit off-grid block method, collocation, interpolation
Procedia PDF Downloads 85

18872 First Order Reversal Curve Method for Characterization of Magnetic Nanostructures
Authors: Bashara Want
Abstract:
One of the key factors limiting the performance of magnetic memory is that the coercivity has a distribution of finite width, and reversal starts at the weakest link in the distribution. One must therefore first know the distribution of coercivities in order to learn how to reduce the width of the distribution and increase the coercive field, and so obtain a system with a narrow distribution. The First Order Reversal Curve (FORC) method characterizes a system with hysteresis via the distribution of local coercivities and, in addition, the local interaction field. The method is more versatile than conventional major hysteresis loops, which give only the statistical behaviour of the magnetic system. The FORC method will be presented and discussed at the conference.
Keywords: magnetic materials, hysteresis, first-order reversal curve method, nanostructures
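A minimal sketch of how the FORC distribution is usually evaluated from a grid of measured reversal curves, using the standard definition ρ(H_r, H) = -½ ∂²M/(∂H_r ∂H); the array layout and simple finite-difference gradients are illustrative assumptions.

```python
import numpy as np

def forc_distribution(M, H_r, H):
    """FORC distribution rho = -1/2 * d^2 M / (dH_r dH).
    M[i, j] is the magnetisation on the reversal curve that starts at
    reversal field H_r[i], measured at applied field H[j]."""
    dM_dH = np.gradient(M, H, axis=1)                # derivative along the applied field
    rho = -0.5 * np.gradient(dM_dH, H_r, axis=0)     # then along the reversal field
    return rho
```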
Procedia PDF Downloads 82

18871 Inverse Scattering of Two-Dimensional Objects Using an Enhancement Method
Authors: A.R. Eskandari, M.R. Eskandari
Abstract:
A 2D complete identification algorithm for dielectric and multiple objects immersed in air is presented. The employed technique consists of initially retrieving the shape and position of the scattering object using a linear sampling method and then determining the electric permittivity and conductivity of the scatterer using adjoint sensitivity analysis. This inversion algorithm results in high computational speed and efficiency, and it can be generalized for any scatterer structure. Also, this method is robust with respect to noise. The numerical results clearly show that this hybrid approach provides accurate reconstructions of various objects.
Keywords: inverse scattering, microwave imaging, two-dimensional objects, Linear Sampling Method (LSM)
Procedia PDF Downloads 387

18870 A Packet Loss Probability Estimation Filter Using Most Recent Finite Traffic Measurements
Authors: Pyung Soo Kim, Eung Hyuk Lee, Mun Suck Jang
Abstract:
A packet loss probability (PLP) estimation filter with a finite memory structure is proposed to estimate the packet rate mean and variance of the input traffic process in real time while removing undesired system and measurement noises. The proposed PLP estimation filter is developed under a weighted least squares criterion using only the finite traffic measurements on the most recent window. The proposed PLP estimation filter is shown to have several inherent properties such as unbiasedness, deadbeat response, and robustness. A guideline for choosing an appropriate window length is described, since the window length can significantly affect the estimation performance. Using computer simulations, the proposed PLP estimation filter is shown to be superior to the Kalman filter for a temporarily uncertain system. One possible explanation for this is that the filtered estimate of the proposed PLP estimation filter converges faster as the window length M decreases.
Keywords: packet loss probability estimation, finite memory filter, infinite memory filter, Kalman filter
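To illustrate the finite-memory idea, a minimal sketch of a weighted least-squares estimate of the packet-rate mean and variance computed from only the most recent window of M measurements; the window length, weighting, and noise model are illustrative assumptions and not the authors' filter.

```python
import numpy as np

def finite_memory_wls(measurements, M, weights=None):
    """Weighted least-squares estimate of the mean packet rate using only
    the most recent M measurements (finite memory / sliding window)."""
    window = np.asarray(measurements[-M:], dtype=float)
    if weights is None:
        weights = np.ones_like(window)       # uniform weights as a default
    w = np.asarray(weights, dtype=float)
    estimate = np.sum(w * window) / np.sum(w)
    variance = np.sum(w * (window - estimate) ** 2) / np.sum(w)
    return estimate, variance
```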
Procedia PDF Downloads 674

18869 Landfill Site Selection Using Multi-Criteria Decision Analysis A Case Study for Gulshan-e-Iqbal Town, Karachi
Authors: Javeria Arain, Saad Malik
Abstract:
The management of solid waste is a crucial and essential aspect of urban environmental management, especially in a city with an ever-increasing population such as Karachi. The total amount of municipal solid waste generated from Gulshan-e-Iqbal town is on average 444.48 tons per day, and landfill sites are a widely accepted solution for the final disposal of this waste. However, an improperly selected site can have immense environmental, economic and ecological impacts. To select an appropriate landfill site, a number of factors should be taken into consideration to minimize the potential hazards of solid waste. The purpose of this research is to analyse the study area for the construction of an appropriate landfill site for the disposal of municipal solid waste generated from Gulshan-e-Iqbal Town, using geospatial techniques and considering hydrological, geological, social and geomorphological factors. This was achieved using the analytical hierarchy process (AHP) and fuzzy analysis as decision support tools, integrated with geographic information science techniques. The eight most critical parameters relevant to the study area were selected. After generating thematic layers for each parameter, overlay analysis was performed in ArcGIS 10.0 software. The results produced by both methods were then compared. The final suitability map using AHP shows that 19% of the total area is least suitable, 6% is suitable but avoided, 46% is moderately suitable, 26% is suitable, 2% is most suitable and 1% is restricted. In comparison, the output map of fuzzy set theory is not in crisp logic; rather, it provides an output map with a range of 0-1, where 0 indicates the least suitable and 1 the most suitable site. Considering the results, it is deduced that the northern part of the city is appropriate for constructing the landfill site, though a final decision on an optimal site could be made only after a field survey and consideration of economic and political factors.
Keywords: Analytical Hierarchy Process (AHP), fuzzy set theory, Geographic Information Sciences (GIS), Multi-Criteria Decision Analysis (MCDA)
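A minimal sketch of the weighted-overlay step performed here with ArcGIS, assuming each thematic layer has already been reclassified onto a common suitability scale and the AHP-derived weights sum to one; the layer names, weights, and tiny example rasters are illustrative assumptions.

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Combine reclassified raster layers (same shape, common suitability
    scale) into a single suitability surface using AHP-derived weights."""
    names = list(layers)
    w = np.array([weights[n] for n in names], dtype=float)
    w /= w.sum()                                   # normalise the AHP weights
    stack = np.stack([layers[n] for n in names])
    return np.tensordot(w, stack, axes=1)          # weighted sum per cell

# Illustrative 2x2 rasters reclassified to a 1 (worst) .. 5 (best) scale
layers = {"lithology":  np.array([[3, 4], [2, 5]]),
          "streams":    np.array([[1, 2], [4, 4]]),
          "lineaments": np.array([[5, 3], [3, 2]])}
weights = {"lithology": 0.5, "streams": 0.3, "lineaments": 0.2}
suitability = weighted_overlay(layers, weights)
```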
Procedia PDF Downloads 506

18868 A New Reliability Allocation Method Based on Fuzzy Numbers
Authors: Peng Li, Chuanri Li, Tao Li
Abstract:
Reliability allocation is quite important during the early design and development stages of a system, in order to apportion its specified reliability goal to subsystems. This paper improves the fuzzy reliability allocation method and gives concrete procedures for determining the factor set, the factor weight set, the judgment set, and the multi-grade fuzzy comprehensive evaluation. To determine the weights of the factor set, modified trapezoidal fuzzy numbers are proposed to reduce errors caused by subjective factors. To decrease the fuzziness in the fuzzy division, an approximation method based on linear programming is employed. To compute the explicit values of the fuzzy numbers, the centroid method of defuzzification is used. An example is provided to illustrate the application of the proposed reliability allocation method based on fuzzy arithmetic.
Keywords: reliability allocation, fuzzy arithmetic, allocation weight, linear programming
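A minimal sketch of the centroid defuzzification step for a trapezoidal fuzzy number with support [a, d] and core [b, c], using the standard closed-form centre-of-area formula; the example fuzzy evaluation is an illustrative assumption.

```python
def trapezoid_centroid(a, b, c, d):
    """Centroid (centre of area) defuzzification of a trapezoidal fuzzy
    number with support [a, d] and core [b, c]."""
    return ((c**2 + d**2 + c*d) - (a**2 + b**2 + a*b)) / (3.0 * (c + d - a - b))

# e.g. a subsystem's fuzzy evaluation (0.6, 0.7, 0.8, 0.9) -> crisp weight 0.75
crisp = trapezoid_centroid(0.6, 0.7, 0.8, 0.9)
```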
Procedia PDF Downloads 344

18867 Comparative Study between Classical P-Q Method and Modern Fuzzy Controller Method to Improve the Power Quality of an Electrical Network
Authors: A. Morsli, A. Tlemçani, N. Ould Cherchali, M. S. Boucherit
Abstract:
This article presents two methods for the compensation of harmonics generated by a nonlinear load. The first is the classical P-Q method. The second is control by a modern artificial intelligence method, specifically fuzzy logic. Both methods are applied to a shunt Active Power Filter (sAPF) based on a three-phase voltage converter with a five-level NPC topology. To calculate the harmonic reference currents, we use the P-Q algorithm, and for pulse generation we use intersective PWM. For flexibility and dynamics, we use fuzzy logic. The results clearly show that the total harmonic distortion obtained with fuzzy logic is better than with the P-Q method.
Keywords: fuzzy logic controller, P-Q method, pulse width modulation (PWM), shunt active power filter (sAPF), total harmonic distortion (THD)
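A minimal sketch of the core step of the classical P-Q method: Clarke-transforming the three-phase voltages and currents and computing the instantaneous real and imaginary powers from which the harmonic reference currents are derived. The signal names and the sign convention for q (one common form of the p-q theory) are illustrative assumptions.

```python
import numpy as np

# Power-invariant Clarke transform (abc -> alpha-beta)
CLARKE = np.sqrt(2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                        [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])

def instantaneous_pq(v_abc, i_abc):
    """Instantaneous real power p and imaginary power q from three-phase
    voltage and current samples (arrays of shape (3,))."""
    v_alpha, v_beta = CLARKE @ np.asarray(v_abc, dtype=float)
    i_alpha, i_beta = CLARKE @ np.asarray(i_abc, dtype=float)
    p = v_alpha * i_alpha + v_beta * i_beta
    q = v_beta * i_alpha - v_alpha * i_beta
    return p, q
```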
Procedia PDF Downloads 549

18866 Implicit Eulerian Fluid-Structure Interaction Method for the Modeling of Highly Deformable Elastic Membranes
Authors: Aymen Laadhari, Gábor Székely
Abstract:
This paper is concerned with the development of a fully implicit and purely Eulerian fluid-structure interaction method tailored for the modeling of the large deformations of elastic membranes in a surrounding Newtonian fluid. We consider a simplified model for the mechanical properties of the membrane, in which the surface strain energy depends on the membrane stretching. The fully Eulerian description is based on the advection of a modified surface tension tensor, and the deformations of the membrane are tracked using a level set strategy. The resulting nonlinear problem is solved by a Newton-Raphson method, featuring a quadratic convergence behavior. A monolithic solver is implemented, and we report several numerical experiments aimed at model validation and illustrating the accuracy of the presented method. We show that stability is maintained for significantly larger time steps.
Keywords: finite element method, implicit, level set, membrane, Newton method
Procedia PDF Downloads 304

18865 An Efficient Algorithm of Time Step Control for Error Correction Method
Authors: Youngji Lee, Yonghyeon Jeon, Sunyoung Bu, Philsu Kim
Abstract:
The aim of this paper is to construct an algorithm of time step control for the error correction method recently developed by one of the authors for solving stiff initial value problems. This is achieved with the generalized Chebyshev polynomial and the corresponding error correction method. The main idea of the proposed scheme is the use of duplicated node points in generalized Chebyshev polynomials of two different degrees, adding the necessary sample points instead of re-sampling all points. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. The constructed algorithm controls both the error and the time step size simultaneously and shows good performance in terms of computational cost compared to the original method. Two stiff problems are numerically solved to assess the effectiveness of the proposed scheme.
Keywords: stiff initial value problem, error correction method, generalized Chebyshev polynomial, node points
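Time step control of this kind accepts or rejects each step by comparing a local error estimate with a tolerance and rescaling the step size. A generic sketch of such a controller is shown below; the scaling exponent 1/(order+1), safety factor, and clipping bounds are standard generic choices assumed here, not the authors' specific rule.

```python
def new_step_size(h, err, tol, order, safety=0.9, h_min=1e-8, h_max=1.0):
    """Generic error-based step-size update: accept the step if err <= tol
    and scale h by (tol/err)^(1/(order+1)), clipped to [h_min, h_max]."""
    accept = err <= tol
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    h_new = min(max(h * factor, h_min), h_max)
    return accept, h_new
```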
Procedia PDF Downloads 574

18864 Backstepping Design and Fractional Differential Equation of Chaotic System
Authors: Ayub Khan, Net Ram Garg, Geeta Jain
Abstract:
In this paper, the backstepping method is proposed to synchronize two fractional-order systems. The simulation results show that this method can effectively synchronize two chaotic systems.
Keywords: backstepping method, fractional order, synchronization, chaotic system
Procedia PDF Downloads 459

18863 Obtain the Stress Intensity Factor (SIF) in a Medium Containing a Penny-Shaped Crack by the Ritz Method
Authors: A. Tavangari, N. Salehzadeh
Abstract:
In crack growth analysis, the stress intensity factor (SIF) is a fundamental prerequisite. In the present study, the mode I stress intensity factor of a three-dimensional penny-shaped crack is obtained in an isotropic elastic cylindrical medium of arbitrary dimensions, under arbitrary loading at the top of the cylinder, by a semi-analytical method based on the Rayleigh-Ritz method. This method, which is based on minimizing the potential energy of the whole system, gives results very close to those of previous studies. Defining the displacements (elastic fields) by trial functions in a defined coordinate system is the basis of this research, so appropriate terms should be found to create the singularity conditions at the crack tip.
Keywords: penny-shaped crack, stress intensity factor, fracture mechanics, Ritz method
Procedia PDF Downloads 366

18862 Passenger Flow Characteristics of Seoul Metropolitan Subway Network
Authors: Kang Won Lee, Jung Won Lee
Abstract:
Characterizing network flow is of fundamental importance for understanding the complex dynamics of networks, and the passenger flow characteristics of a subway network are highly relevant for effective transportation management in urban cities. In this study, the passenger flow of the Seoul metropolitan subway network is investigated and characterized through statistical analysis. The traditional betweenness centrality measure considers only the topological structure of the network and ignores transportation factors. This paper proposes a weighted betweenness centrality measure that incorporates monthly passenger flow volume. We apply the proposed measure to the Seoul metropolitan subway network, which comprises 493 stations and 16 lines. Several interesting insights about the network are derived from the new measure. Using the Kolmogorov-Smirnov test, we also find that the monthly passenger flow between any two stations follows a power-law distribution, and that other traffic characteristics, such as congestion level and through-flow traffic, follow an exponential distribution.
Keywords: betweenness centrality, correlation coefficient, power-law distribution, Korea traffic DB
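A minimal sketch of computing betweenness centrality on a graph whose edges carry passenger-volume-derived weights, using networkx. Note that networkx interprets the `weight` attribute as a distance, so volumes are mapped to traversal costs (here, inverse volume); that conversion and the tiny example network are illustrative assumptions, not the authors' weighted measure.

```python
import networkx as nx

def flow_weighted_betweenness(edges_with_volume):
    """edges_with_volume: iterable of (u, v, monthly_passenger_volume).
    Heavier passenger volumes are mapped to smaller traversal costs."""
    G = nx.Graph()
    for u, v, vol in edges_with_volume:
        G.add_edge(u, v, volume=vol, weight=1.0 / max(vol, 1))
    return nx.betweenness_centrality(G, weight="weight", normalized=True)

# Illustrative three-station example
bc = flow_weighted_betweenness([("A", "B", 120000),
                                ("B", "C", 80000),
                                ("A", "C", 20000)])
```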
Procedia PDF Downloads 291

18861 Degradation of Polycyclic Aromatic Hydrocarbons-Contaminated Soil by Proxy-Acid Method
Authors: Reza Samsami
Abstract:
The aim of the study was the degradation of polycyclic aromatic hydrocarbons (PAHs) by the proxy-acid method. The amounts of PAHs were determined in a silty-clay soil sample from an aged oil refinery field in Abadan, Iran. The proxy-acid treatment method was investigated. The results showed that the proxy-acid system is an effective method for the degradation of PAHs. The results also demonstrated that the number of fused aromatic rings has no significant effect on PAH removal by the proxy-acid method.
Keywords: proxy-acid treatment, silty-clay soil, PAHs, degradation
Procedia PDF Downloads 269

18860 Critical Activity Effect on Project Duration in Precedence Diagram Method
Authors: Salman Ali Nisar, Koshi Suzuki
Abstract:
The Precedence Diagram Method (PDM), with its additional relationships between activities, i.e., start-to-start, finish-to-finish, and start-to-finish, provides a more flexible schedule than the traditional Critical Path Method (CPM). However, changing the duration of critical activities in a PDM network can have an anomalous effect on the critical path. Researchers have proposed classifications of critical activity effects. In this paper, we study these classifications further and provide more detailed information. Furthermore, we determine the maximum amount of time for each class of critical activity effect, by which project managers can control the dynamic feature (shortening/lengthening) of critical activities and the project duration more efficiently.
Keywords: construction project management, critical path method, project scheduling, precedence diagram method
Procedia PDF Downloads 512

18859 An Indoor Positioning System in Wireless Sensor Networks with Measurement Delay
Authors: Pyung Soo Kim, Eung Hyuk Lee, Mun Suck Jang
Abstract:
In the current paper, an indoor positioning system is proposed with consideration of measurement delay. Firstly, an estimation filter with a measurement delay is designed for the indoor positioning mechanism under a weighted least squares criterion, which utilizes only finite measurements on the most recent window. The proposed estimation-filtering-based scheme gives filtered estimates of the position, velocity and acceleration of the moving target in real time, while removing undesired noisy effects and preserving the desired moving positions. Secondly, the proposed scheme is shown to have good inherent properties such as unbiasedness, efficiency, time-invariance, deadbeat response, and robustness due to the finite memory structure. Finally, computer simulations show that the proposed estimation-filtering-based scheme can outperform the existing infinite-memory-filtering-based mechanism.
Keywords: indoor positioning system, wireless sensor networks, measurement delay
Procedia PDF Downloads 484

18858 A Comparison of Methods for Estimating Dichotomous Treatment Effects: A Simulation Study
Authors: Jacqueline Y. Thompson, Sam Watson, Lee Middleton, Karla Hemming
Abstract:
Introduction: The odds ratio (estimated via logistic regression) is a well-established and common approach for estimating covariate-adjusted binary treatment effects when comparing a treatment and a control group with dichotomous outcomes. Its popularity stems primarily from its stability and robustness to model misspecification. However, the situation is different for the relative risk and the risk difference, which are arguably easier to interpret and better suited to specific designs such as non-inferiority studies. So far, there is no equivalent, widely accepted approach to estimate an adjusted relative risk or risk difference when conducting clinical trials. This is partly due to the lack of a comprehensive evaluation of the available candidate methods. Methods/Approach: A simulation study is designed to evaluate the performance of relevant candidate methods for estimating relative risks, representing conditional and marginal estimation approaches. We consider the log-binomial generalised linear model (GLM) with iteratively weighted least squares (IWLS) and model-based standard errors (SEs); the log-binomial GLM with convex optimisation and model-based SEs; the log-binomial GLM with convex optimisation and permutation tests; the modified-Poisson GLM with IWLS and robust SEs; log-binomial generalised estimating equations (GEE) with robust SEs; marginal standardisation with delta-method SEs; and marginal standardisation with permutation-test SEs. Independent and identically distributed datasets are simulated from a randomised controlled trial to evaluate these candidate methods. Simulations are replicated 10,000 times for each scenario across all possible combinations of sample sizes (200, 1000, and 5000), outcome rates (10%, 50%, and 80%), and covariate effects (ranging from -0.05 to 0.7), representing weak, moderate or strong relationships. Treatment effects (0, -0.5, and 1 on the log scale) cover the null (H0) and alternative (H1) hypotheses to evaluate coverage and power in realistic scenarios. Performance measures (bias, mean square error (MSE), relative efficiency, and convergence rates) are evaluated across scenarios covering a range of sample sizes, event rates, covariate prognostic strength, and model misspecifications. Potential Results, Relevance & Impact: There are several methods for estimating unadjusted and adjusted relative risks. However, it is unclear which method(s) is the most efficient, preserves the type-I error rate, is robust to model misspecification, or is the most powerful when adjusting for non-prognostic and prognostic covariates. GEE estimates may be biased when the outcome distributions are not from marginal binary data. Also, it appears that marginal standardisation and convex optimisation may perform better than the GLM IWLS log-binomial approach.
Keywords: binary outcomes, statistical methods, clinical trials, simulation study
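A minimal sketch of the marginal standardisation approach mentioned above: fit a logistic model for the outcome, then predict each subject's risk with treatment set to 1 and to 0 for everyone and compare the average risks. The variable names, the use of statsmodels, and the simulated trial data are illustrative assumptions, not the study's implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def marginal_relative_risk(df):
    """Marginal standardisation: average predicted risks under treatment
    and under control, from a covariate-adjusted logistic model."""
    model = smf.logit("outcome ~ treatment + covariate", data=df).fit(disp=0)
    risk1 = model.predict(df.assign(treatment=1)).mean()
    risk0 = model.predict(df.assign(treatment=0)).mean()
    return risk1 / risk0, risk1 - risk0     # relative risk, risk difference

# Illustrative simulated trial
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"treatment": rng.integers(0, 2, n),
                   "covariate": rng.normal(size=n)})
p = 1 / (1 + np.exp(-(-1.0 + 0.5 * df.treatment + 0.3 * df.covariate)))
df["outcome"] = rng.binomial(1, p)
rr, rd = marginal_relative_risk(df)
```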
Procedia PDF Downloads 115

18857 Connectomic Correlates of Cerebral Microhemorrhages in Mild Traumatic Brain Injury Victims with Neural and Cognitive Deficits
Authors: Kenneth A. Rostowsky, Alexander S. Maher, Nahian F. Chowdhury, Andrei Irimia
Abstract:
The clinical significance of cerebral microbleeds (CMBs) due to mild traumatic brain injury (mTBI) remains unclear. Here we use magnetic resonance imaging (MRI), diffusion tensor imaging (DTI) and connectomic analysis to investigate the statistical association between mTBI-related CMBs, post-TBI changes to the human connectome and neurological/cognitive deficits. This study was undertaken in agreement with US federal law (45 CFR 46) and was approved by the Institutional Review Board (IRB) of the University of Southern California (USC). Two groups, one consisting of 26 (13 females) mTBI victims and another comprising 26 (13 females) healthy control (HC) volunteers, were recruited through IRB-approved procedures. The acute Glasgow Coma Scale (GCS) score was available for each mTBI victim (mean µ = 13.2; standard deviation σ = 0.4). Each HC volunteer was assigned a GCS of 15 to indicate the absence of head trauma at the time of enrollment in our study. Volunteers in the HC and mTBI groups were matched according to their sex and age (HC: µ = 67.2 years, σ = 5.62 years; mTBI: µ = 66.8 years, σ = 5.93 years). MRI [including T1- and T2-weighted volumes, gradient recalled echo (GRE)/susceptibility weighted imaging (SWI)] and gradient echo (GE) DWI volumes were acquired using the same MRI scanner type (Trio TIM, Siemens Corp.). Skull-stripping and eddy current correction were implemented. DWI volumes were processed in TrackVis (http://trackvis.org) and 3D Slicer (http://www.slicer.org). Tensors were fit to DWI data to perform DTI, and tractography streamlines were then reconstructed using deterministic tractography. A voxel classifier was used to identify image features as CMB candidates using Microbleed Anatomic Rating Scale (MARS) guidelines. For each peri-lesional DTI streamline bundle, the null hypothesis was formulated as the statement that there was no neurological or cognitive deficit associated with between-scan differences in the mean FA of DTI streamlines within each bundle. The statistical significance of each hypothesis test was calculated at the α = 0.05 level, subject to the family-wise error rate (FWER) correction for multiple comparisons. Results: In HC volunteers, the along-track analysis failed to identify statistically significant differences in the mean FA of DTI streamline bundles. In the mTBI group, significant differences in the mean FA of peri-lesional streamline bundles were found in 21 out of 26 volunteers. In those volunteers where significant differences had been found, these differences were associated with an average of ~47% of all identified CMBs (σ = 21%). In 12 out of the 21 volunteers exhibiting significant FA changes, cognitive functions (memory acquisition and retrieval, top-down control of attention, planning, judgment, cognitive aspects of decision-making) were found to have deteriorated over the six months following injury (r = -0.32, p < 0.001). Our preliminary results suggest that acute post-TBI CMBs may be associated with cognitive decline in some mTBI patients. Future research should attempt to identify mTBI patients at high risk for cognitive sequelae.
Keywords: traumatic brain injury, magnetic resonance imaging, diffusion tensor imaging, connectomics
Procedia PDF Downloads 171

18856 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton
Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani
Abstract:
Extreme data in an observation can occur due to unusual circumstances during the observation. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution; here it is the Gumbel distribution with two parameters. The parameter estimates of the Gumbel distribution under the maximum likelihood (ML) method are difficult to determine exactly, so an approximate solution is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has the form of a double exponential function. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the change in the parameter values at each iteration. Newton's method is then modified by the addition of a step length to guarantee convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is further modified by updating an approximation of the second derivative at each iteration. Parameter estimation of the Gumbel distribution by this numerical approach is carried out by calculating the parameter values that maximize the likelihood function; this requires the gradient vector and the Hessian matrix. This research is a theoretical and applied study based on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method was then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The estimates indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall decreased.
Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton
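A minimal sketch of the numerical approach described above: minimize the negative log-likelihood of the two-parameter Gumbel (maxima) distribution with a BFGS quasi-Newton routine. The use of scipy, the log-parameterisation of the scale, and the example rainfall values are illustrative assumptions, not the authors' algorithm or data.

```python
import numpy as np
from scipy.optimize import minimize

def gumbel_negloglik(params, x):
    """Negative log-likelihood of the Gumbel (maxima) distribution with
    location mu and scale beta > 0 (beta parameterised via its log)."""
    mu, log_beta = params
    beta = np.exp(log_beta)
    z = (x - mu) / beta
    return np.sum(np.log(beta) + z + np.exp(-z))

def fit_gumbel_bfgs(x):
    """Maximum-likelihood fit of (mu, beta) using the BFGS quasi-Newton method."""
    x = np.asarray(x, dtype=float)
    start = np.array([x.mean(), np.log(x.std())])
    res = minimize(gumbel_negloglik, start, args=(x,), method="BFGS")
    return res.x[0], np.exp(res.x[1])            # mu_hat, beta_hat

# e.g. block maxima of daily rainfall, in mm (illustrative values)
mu_hat, beta_hat = fit_gumbel_bfgs([92.0, 110.5, 87.3, 130.2, 101.8, 96.4])
```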
Procedia PDF Downloads 326

18855 Implementation of a Method of Crater Detection Using Principal Component Analysis in FPGA
Authors: Izuru Nomura, Tatsuya Takino, Yuji Kageyama, Shin Nagata, Hiroyuki Kamata
Abstract:
We propose a method of crater detection from images of the lunar surface captured by a small space probe. We use principal component analysis (PCA) to detect craters. Nevertheless, considering the severe environment of space, it is impossible to use a generic computer in practice. Accordingly, we have to implement the method in an FPGA. This paper compares the FPGA and a generic computer in terms of the processing time of the crater detection method using principal component analysis.
Keywords: crater, PCA, eigenvector, strength value, FPGA, processing time
Procedia PDF Downloads 558

18854 Detecting the Palaeochannels Based on Optical Data and High-Resolution Radar Data for Periyarriver Basin
Authors: S. Jayalakshmi, Gayathri S., Subiksa V., Nithyasri P., Agasthiya
Abstract:
Paleochannels are the buried parts of an active river system that were separated from the active river channel by the process of cutoff or abandonment during the dynamic evolution of the river. Over time, they are filled by young unconsolidated or semi-consolidated sediments. Additionally, they are impacted by geomorphological influences, lineament alterations, and other factors. The primary goal of this study is to identify the paleochannels in the Periyar river basin for the year 2023. These channels have a high probability of hosting natural resources, including gold, platinum, tin, and uranium. Numerous techniques are used to map paleochannels. Using optical data, satellite images were collected from various sources, comprising multispectral satellite images from which indices such as the Normalized Difference Vegetation Index (NDVI), Normalized Difference Water Index (NDWI) and Soil Adjusted Vegetation Index (SAVI), and thematic layers such as lithology, stream network and lineaments, were prepared. Weights were assigned to each layer based on its importance, and overlay analysis was performed, which concluded that the northwest region of the area shows some paleochannel patterns. The results were cross-verified against the results obtained using microwave data. Using Sentinel data, a Synthetic Aperture Radar (SAR) image was extracted from the European Space Agency (ESA) portal and pre-processed using SNAP 6.0. In addition, a polarimetric decomposition technique was incorporated to detect the paleochannels based on their scattering properties. Further, principal component analysis was performed for enhanced output imagery. The results obtained from the optical and microwave radar data were compared, and the locations of the paleochannels were detected. This resulted in six paleochannels in the study area, of which three were validated against existing data published by the Department of Geology and Environmental Science, Kerala. The other three paleochannels were newly detected with the help of the SAR image.
Keywords: paleochannels, optical data, SAR image, SNAP
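A minimal sketch of the spectral indices prepared from the multispectral bands before the weighted overlay, using their standard band-ratio formulas (NDWI in the McFeeters green/NIR form); the band array names, the small epsilon guarding division by zero, and the soil factor L = 0.5 are illustrative assumptions.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance bands."""
    return (nir - red) / (nir + red + 1e-10)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters form: green vs. NIR)."""
    return (green - nir) / (green + nir + 1e-10)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil brightness correction factor L."""
    return (1 + L) * (nir - red) / (nir + red + L + 1e-10)
```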
Procedia PDF Downloads 93

18853 MapReduce Logistic Regression Algorithms with RHadoop
Authors: Byung Ho Jung, Dong Hoon Lim
Abstract:
Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. Logistic regression is used extensively in numerous disciplines, including the medical and social science fields. In this paper, we address the problem of estimating the parameters of logistic regression based on the MapReduce framework with RHadoop, which integrates the R and Hadoop environments and is applicable to large-scale data. There are three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods require a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also compared the performance of our Newton-Raphson method with the gradient descent and cost minimization methods. The results showed that our Newton-Raphson method appeared to be the most robust on all data tested.
Keywords: big data, logistic regression, MapReduce, RHadoop
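For reference, a minimal single-machine sketch of the Newton-Raphson (IRLS) update for logistic regression, which, as noted above, needs no learning rate. The original work distributes the corresponding sufficient statistics with RHadoop/MapReduce in R, which this Python sketch does not attempt; the variable names and iteration count are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_newton(X, y, n_iter=25):
    """Newton-Raphson for logistic regression: beta += H^{-1} X^T (y - p).
    X: (n, d) design matrix (include a column of ones for the intercept), y in {0, 1}."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = sigmoid(X @ beta)
        W = p * (1 - p)                        # diagonal of the IRLS weight matrix
        grad = X.T @ (y - p)
        hess = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hess, grad)    # full Newton step, no learning rate
    return beta
```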
Procedia PDF Downloads 285

18852 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm
Abstract:
Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordering, we propose a method based on ordinal decision trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension
Procedia PDF Downloads 101

18851 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries
Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis
Abstract:
In the present work, a numerical method is presented for estimating the appropriate gradient magnetic fields for optimally driving particles into a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM) and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that aims to eliminate the deviation of the nanoparticles from a desired path: the gradient magnetic field is constantly adjusted in a suitable way so that the particles follow the desired trajectory as closely as possible. Using the proposed method, it is evident that the particle diameter is a crucial parameter for efficient navigation. In addition, increasing the particle diameter decreases the deviation from the desired path. Moreover, the navigation method can guide nanoparticles into the desired areas with an efficiency of approximately 99%.
Keywords: computational fluid dynamics, CFD, covariance matrix adaptation evolution strategy, discrete element method, DEM, magnetic navigation, spherical particles
Procedia PDF Downloads 142

18850 Meta-Instruction Theory in Mathematics Education and Critique of Bloom’s Theory
Authors: Abdollah Aliesmaeili
Abstract:
The purpose of this research is to present a different perspective on basic mathematics teaching, a method called meta-instruction, which reverses the learning path. Meta-instruction is a method of teaching in which the teaching trajectory starts from brain education and moves into learning. This research focuses on the behavior of the mind during learning. In this method, students are not instructed in mathematics; they are educated. Another goal of the research is to criticize Bloom's classification in the cognitive domain and reverse it, because it cannot meet the educational and instructional needs of the new generation, and to substitute mathematics education for mathematics teaching. This is an indirect method of teaching. The research method is longitudinal, over four years. The statistical samples included students aged 6 to 11. The research focuses on improving the mental abilities of children to explore mathematical rules and operations through play alone, with eight measurements (two examinations per year). The results showed that there is a significant difference between groups in remembering, understanding, and applying. Moreover, educating in mathematics is more effective than instructing for overall learning abilities.
Keywords: applying, Bloom's taxonomy, brain education, mathematics teaching method, meta-instruction, remembering, starmath method, understanding
Procedia PDF Downloads 24

18849 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel
Authors: Seyed Abolhasan Naeini, M. Mortezaee
Abstract:
Various factors, such as the method of installation, the pile type, the pile material and the pile shape, can affect the final bearing capacity of a pile executed in the soil; among them, the method of installation is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. Therefore, the current paper first presents and reviews the frustum confining vessel (FCV) as a suitable tool for the physical modeling of deep foundations. Then, by describing loading tests of two steel piles, one open-ended and one closed-ended, each installed by two methods, "with displacement" and "without displacement", the effect of end conditions and installation method on the final bearing capacity of the pile is investigated. The soil used in the current paper is Firoozkooh silty sand. The results of the experiments show that, in general, the without-displacement installation method yields a larger bearing capacity for both piles, and that for a given installation method the closed-ended pile shows a slightly higher bearing capacity.
Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method
Procedia PDF Downloads 153

18848 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses
Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim
Abstract:
The capacity spectrum method (CSM), one of the methodologies used to evaluate the seismic fragility of building structures, has long been recognized as the most convenient method, even though it has several limitations in predicting the seismic response of the structures of interest. This paper proposes a procedure to estimate seismic fragility curves using incremental dynamic analysis (IDA) rather than a CSM. To achieve the research purpose, this study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods, the IDA method and the CSM. The two sets of fragility curves are similar in the slight and moderate damage states, whereas the fragility curves obtained from the IDA method present less variation (or uncertainty) in the extensive and complete damage states. This is because the IDA method can properly capture the structural response beyond yielding, unlike the CSM, and can directly account for higher mode effects. From these observations, the CSM could overestimate the seismic vulnerability of the studied structure in the extensive and complete damage states.
Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame
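A minimal sketch of one common way fragility curves are derived from IDA results: for each damage state, collect the intensity measure at which each ground-motion record first reaches that state and fit a lognormal CDF by taking moments of the logs. The lognormal form, the moment-based fit, and the example spectral-acceleration values are illustrative assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np
from scipy.stats import lognorm

def fragility_from_ida(im_at_damage):
    """Fit a lognormal fragility curve to the intensity measures (e.g. Sa)
    at which each ground-motion record first reached a given damage state."""
    ln_im = np.log(np.asarray(im_at_damage, dtype=float))
    median = np.exp(ln_im.mean())        # median capacity
    beta = ln_im.std(ddof=1)             # lognormal dispersion
    return median, beta

def prob_exceedance(im, median, beta):
    """P(damage state is reached or exceeded | intensity measure = im)."""
    return lognorm.cdf(im, s=beta, scale=median)

# Illustrative spectral accelerations (g) at onset of 'extensive' damage
median, beta = fragility_from_ida([0.42, 0.55, 0.38, 0.61, 0.47, 0.52])
p = prob_exceedance(0.5, median, beta)
```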
Procedia PDF Downloads 423