Search results for: estimation after selection
3644 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain
Authors: Jia Zhang, Fengmei Yao, Yanjing Tan
Abstract:
The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model was modified to estimate the yield of a C4 crop by adapting the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield Estimation for Crops) model. Yield was calculated by multiplying net primary productivity (NPP) by the harvest index (HI) derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during the period 2002-2011. Statistical maize yield data from the study area were used to validate the simulated results at the county level. The results showed that the Pearson correlation coefficient (R) between the simulated yield and the statistical data was 0.827 (P < 0.01), and the root mean square error (RMSE) was 712 kg/ha with a relative error (RE) of 9.3%. From 2002 to 2011, the yield of the maize planting zone in the Northeast China Plain increased with a small coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. Hence, the results demonstrated that the modified process-based model coupled with remote sensing data is suitable for spatially explicit yield prediction of maize in the Northeast China Plain.
Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain
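As a rough illustration of the yield calculation (yield = NPP × HI) and of the validation statistics quoted above, the following sketch uses hypothetical county-level numbers; the function names and values are assumptions, not the paper's data.

```python
import numpy as np

def yield_from_npp(npp_kg_ha, harvest_index):
    """Grain yield as NPP multiplied by the harvest index derived from grain/stalk ratio."""
    return npp_kg_ha * harvest_index

def validation_stats(simulated, observed):
    """Pearson R, RMSE and relative error used to compare simulated and statistical yields."""
    simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
    r = np.corrcoef(simulated, observed)[0, 1]
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    re = rmse / np.mean(observed) * 100.0          # relative error in percent
    return r, rmse, re

# Hypothetical county-level values (kg/ha) purely for illustration
npp = np.array([14500.0, 16200.0, 15100.0])
hi = np.array([0.48, 0.50, 0.47])
observed = np.array([7100.0, 8300.0, 7000.0])
sim = yield_from_npp(npp, hi)
print(validation_stats(sim, observed))
```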
3643 Performance of an Absorption Refrigerator Using a Solar Thermal Collector
Authors: Abir Hmida, Nihel Chekir, Ammar Ben Brahim
Abstract:
In the present paper, we investigate the feasibility of a solar-thermal-driven cold room in Gabes, in the southern region of Tunisia. The cold room of 109 m³ is refrigerated using an ammonia absorption machine and is intended to preserve dates during the hot months of the year. A detailed study of the cold room previously led to an estimate of the cooling load of the proposed storage room under the operating conditions of the region. The next step is the estimation of the heat required at the generator of the absorption machine to ensure the desired cold temperature. A thermodynamic analysis was carried out and a complete description of the system was determined. We propose here to supply the needed heat thermally from the sun by using vacuum tube collectors. We found that at least 21 m² of solar collectors is necessary to meet the heat demand of the solar cold room.
Keywords: absorption, ammonia, cold room, solar collector, vacuum tube
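The collector sizing described above amounts to a simple heat balance; the sketch below illustrates it with assumed COP, irradiance and efficiency values (all placeholders, not the paper's figures).

```python
def collector_area(q_cooling_kw, cop_absorption, solar_irradiance_kw_m2, collector_efficiency):
    """Minimum vacuum-tube collector area needed to supply the generator heat of an
    absorption machine that must deliver q_cooling_kw of refrigeration."""
    q_generator = q_cooling_kw / cop_absorption            # heat required at the generator
    return q_generator / (solar_irradiance_kw_m2 * collector_efficiency)

# Illustrative numbers only (not taken from the paper): a ~7 kW cooling load,
# an absorption COP of 0.55, 0.8 kW/m2 of irradiance and a 65% efficient collector
print(collector_area(q_cooling_kw=7.0, cop_absorption=0.55,
                     solar_irradiance_kw_m2=0.8, collector_efficiency=0.65))
# roughly 25 m2, the same order of magnitude as the ~21 m2 reported for the Gabes cold room
```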
3642 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation
Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov
Abstract:
The problem of ensuring the integrity of VVER-type reactor equipment is now most pressing in connection with the justification of the safety of NPP units and the extension of their service life to 60 years and more. First of all, it concerns older units with VVER-440 and VVER-1000 reactors. The justification of VVER equipment integrity depends on the reliability of the estimation of the degree of equipment damage. One of the mandatory requirements providing the reliability of such an estimation, and also of the evaluation of VVER equipment lifetime, is the monitoring of equipment radiation loading parameters. In this connection, there is a problem of justifying the normative parameters used for the estimation of pressure vessel metal embrittlement, such as the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacement per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in predicting steel embrittlement under neutron irradiation. However, DPA is a more physically legitimate measure of neutron damage to Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this case should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) on all VVER equipment should be under control, and to give reasonable estimates of such parameters over the volume of all equipment. The second task is to give a conservative estimate of each parameter, including its uncertainty. Results of recent investigations allow the conservatism of the calculational predictions to be tested, and, as shown in the paper, the combination of ex-vessel measured data with calculated data allows assessment of unpredicted uncertainties that result from the specific features of individual equipment of VVER reactors. Some results of calculational-experimental investigations are presented in this paper.
Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations
3641 Improved Color-Based K-Mean Algorithm for Clustering of Satellite Image
Authors: Sangeeta Yadav, Mantosh Biswas
Abstract:
In this paper, we propose an improved color-based K-means algorithm for clustering of satellite (SAR) images. Our method comprises two stages. The first stage is an interactive selection process in which users input the number of colors (ncolor) and the number of clusters, and are then prompted to select points in each color cluster. In the second stage, these points are given as input to the K-means clustering algorithm, which clusters the image based on color and minimum squared Euclidean distance. The proposed method reduces the mixed-pixel problem to a great extent.
Keywords: cluster, ncolor method, K-mean method, interactive selection process
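A minimal sketch of the second stage follows, assuming one user-selected seed pixel per cluster; the interactive ncolor selection of the paper is reduced here to hard-coded seed colors and a toy synthetic image.

```python
import numpy as np

def kmeans_from_seeds(pixels, seed_points, n_iter=20):
    """K-means in color space initialised from user-selected seed pixels (one per cluster),
    assigning each pixel by minimum squared Euclidean distance."""
    centers = np.asarray(seed_points, float)               # (k, 3) user-chosen colors
    pixels = pixels.reshape(-1, 3).astype(float)
    for _ in range(n_iter):
        d2 = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels, centers

# Illustration on a tiny synthetic "image"; a real satellite scene would replace it
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16, 3))
labels, centers = kmeans_from_seeds(img, seed_points=[[20, 20, 20], [200, 200, 200]])
print(labels.reshape(16, 16))
```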
3640 Size-Reduction Strategies for Iris Codes
Authors: Jutta Hämmerle-Uhl, Georg Penn, Gerhard Pötzelsberger, Andreas Uhl
Abstract:
Iris codes contain bits with different entropy. This work investigates different strategies to reduce the size of iris code templates with the aim of reducing storage requirements and the computational demand of the matching process. Besides simple sub-sampling schemes, a binary multi-resolution representation as used in the JBIG hierarchical coding mode is also assessed. We find that iris code template size can be reduced significantly while maintaining recognition accuracy. In addition, we propose a two-stage identification approach, using small-sized iris code templates in a pre-selection stage and full-resolution templates for final identification, which shows promising recognition behaviour.
Keywords: iris recognition, compact iris code, fast matching, best bits, pre-selection identification, two-stage identification
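The sketch below illustrates simple sub-sampling plus the proposed two-stage matching, using random bit strings as stand-ins for real iris codes; the sub-sampling factor and shortlist size are illustrative assumptions, not the values evaluated in the paper.

```python
import numpy as np

def subsample(code, factor=4):
    """Simple sub-sampling: keep every `factor`-th bit of the iris code."""
    return code[::factor]

def hamming(a, b):
    """Fractional Hamming distance between two binary templates."""
    return np.mean(a != b)

def two_stage_identify(probe, gallery, shortlist=5, factor=4):
    """Pre-select with small templates, then re-rank the shortlist at full resolution."""
    small_probe = subsample(probe, factor)
    pre = sorted(gallery, key=lambda g: hamming(small_probe, subsample(g[1], factor)))[:shortlist]
    best = min(pre, key=lambda g: hamming(probe, g[1]))
    return best[0]

rng = np.random.default_rng(1)
gallery = [(i, rng.integers(0, 2, 2048).astype(bool)) for i in range(100)]
probe = gallery[42][1].copy()
probe[rng.choice(2048, 200, replace=False)] ^= True   # simulate intra-class noise
print(two_stage_identify(probe, gallery))              # -> 42
```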
3639 Estimation of Coefficients of Ridge and Principal Components Regressions with Multicollinear Data
Authors: Rajeshwar Singh
Abstract:
The presence of multicollinearity is common when handling several explanatory variables simultaneously, as they may exhibit a linear relationship among themselves. A great problem then arises in understanding the impact of the explanatory variables on the dependent variable, and the method of least squares estimation gives inexact estimates. In this case, it is advised to detect its presence before proceeding further. Ridge regression reduces the degree of its occurrence, while principal components regression gives good estimates in this situation. This paper discusses the well-known techniques of ridge and principal components regression and applies them to obtain estimates of the coefficients by both techniques. In addition, this paper discusses the conflicting claims on the discovery of the method of ridge regression based on available documents.
Keywords: conflicting claim on credit of discovery of ridge regression, multicollinearity, principal components and ridge regressions, variance inflation factor
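For readers who want to reproduce the basic computations, a small sketch of VIF-based multicollinearity detection, ridge estimation and principal components regression on a deliberately collinear toy dataset follows; the regularization strength and component count are arbitrary choices, not the paper's.

```python
import numpy as np

def vif(X):
    """Variance inflation factor of each column (diagonal of the inverse correlation matrix)."""
    return np.diag(np.linalg.inv(np.corrcoef(X, rowvar=False)))

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def pcr(X, y, n_comp):
    """Principal components regression: regress y on the first n_comp PCs, map back to X-space."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_comp].T
    gamma = np.linalg.lstsq(Z, y, rcond=None)[0]
    return Vt[:n_comp].T @ gamma

# Collinear toy data (x2 is nearly a copy of x1)
rng = np.random.default_rng(0)
x1 = rng.normal(size=200); x2 = x1 + rng.normal(scale=0.01, size=200); x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3]); y = 2 * x1 + 0.5 * x3 + rng.normal(size=200)
print("VIF :", vif(X))                                   # huge VIFs flag multicollinearity
print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0])     # unstable under multicollinearity
print("ridge:", ridge(X, y, lam=1.0))
print("PCR :", pcr(X, y, n_comp=2))
```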
3638 The Impact of Temperamental Traits of Candidates for Aviation School on Their Strategies for Coping with Stress during Selection Exams in Physical Education
Authors: Robert Jedrys, Zdzislaw Kobos, Justyna Skrzynska, Zbigniew Wochynski
Abstract:
Professions connected to aviation require an assessment of the health, psychological and psychomotor suitability and overall physical fitness of applicants. Assessment of physical condition is conducted by committees consisting of aero-medical specialists in clinical medicine and aviation. In addition, psychological predispositions should be evaluated by specialized psychologists familiar with the specifics of the tasks and requirements for the various positions in aviation. Both the physical abilities and the general physical fitness of candidates for aviation are assessed during the selection exams, which also test the ability to deal with stress, something that is very important in aviation. Hence, the exams in physical education not only help to rank candidates in terms of their efficiency and performance, but also allow their functioning under stress to be evaluated using psychological tests. Moreover, before-test stress is a predictor of success in the subsequent stages of education and practical training in aviation. The aim of the study was to evaluate the influence of temperamental traits on the strategies used for coping with stress during the selection exams in physical education that decide admission to aviation school. The study involved 30 candidates for fighter pilot training in aviation school. To evaluate temperament, 'The Formal Characteristics of Behaviour-Temperament Inventory' (FCB-TI) by B. Zawadzki and J. Strelau was used. To determine the pattern of coping with stress, 'The Coping Inventory for Stressful Situations' (CISS) by N. S. Endler and J. D. A. Parker was employed. The study of temperament and styles of coping with stress was conducted directly before the selection exam in physical education. The results were analyzed with the 'Statistica 9' program. The studies showed that: there is a negative correlation between the temperament trait 'perseverance' and a preferred style of coping with stress concentrated on the task (r = -0.590; p < 0.004); there is a positive correlation between the temperament trait 'emotional reactivity' and a preference to deal with a stressful situation with a style centered on emotions (r = 0.520; p < 0.011); and there is a negative correlation between the temperament trait 'strength' and a style of coping with stress concentrated on emotions (r = -0.580; p < 0.004). The studies indicate that temperament traits determine the perception of stress and the coping styles preferred during selection, such as during the exams in physical education.
Keywords: aviation, physical education, stress, temperamental traits
3637 Partner Selection in International Strategic Alliances: The Case of the Information Industry
Authors: H. Nakamura
Abstract:
This study analyzes international strategic alliances in the information industry. The purpose of this study is to clarify the strategic intention of an international alliance. Secondly, it investigates the influence of differences in the target markets of partner companies on alliances. Using an international strategy theory approach to analyze the global strategies of global companies, the study compares a database business and an electronic publishing business. In particular, these cases emphasize factors attributable to 'people' and 'learning', reliability and communication between organizations, and the evolution of the IT infrastructure. The theory developed in this study validates the effectiveness of these strategies.
Keywords: database business, electronic library, international strategic alliances, partner selection
3636 Quantification of Methane Emissions from Solid Waste in Oman Using IPCC Default Methodology
Authors: Wajeeha A. Qazi, Mohammed-Hasham Azam, Umais A. Mehmood, Ghithaa A. Al-Mufragi, Noor-Alhuda Alrawahi, Mohammed F. M. Abushammala
Abstract:
Municipal Solid Waste (MSW) disposed of in landfill sites decomposes under anaerobic conditions and produces gases which mainly contain carbon dioxide (CO₂) and methane (CH₄). Methane has a global warming potential 25 times that of CO₂ and can potentially affect human life and the environment. Thus, this research aims to determine MSW generation and the annual CH₄ emissions from the generated waste in Oman over the years 1971-2030. The estimation of total waste generation was performed using existing models, while the CH₄ emissions were estimated using the Intergovernmental Panel on Climate Change (IPCC) default method. It is found that total MSW generation in Oman might reach 3,089 Gg in the year 2030, which would produce approximately 85 Gg of CH₄ emissions in that year.
Keywords: methane, emissions, landfills, solid waste
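A sketch of the IPCC default (mass-balance) computation follows; the parameter defaults (MCF, DOC, etc.) are illustrative placeholders and are not the country-specific values used for Oman in the paper.

```python
def ch4_ipcc_default(msw_total_gg, frac_to_swds, mcf=0.6, doc=0.15,
                     doc_f=0.77, f=0.5, recovered_gg=0.0, ox=0.0):
    """IPCC default (mass-balance) estimate of annual CH4 from solid waste disposal:
    CH4 = (MSW_T * MSW_F * MCF * DOC * DOC_F * F * 16/12 - R) * (1 - OX).
    Parameter defaults here are placeholders, not the values calibrated in the paper."""
    l0 = mcf * doc * doc_f * f * (16.0 / 12.0)       # methane generation potential
    return (msw_total_gg * frac_to_swds * l0 - recovered_gg) * (1.0 - ox)

# E.g. ~3,089 Gg of MSW, all landfilled, with the placeholder parameters above
# (not calibrated to the paper's inputs, so the result differs from the reported 85 Gg)
print(ch4_ipcc_default(3089.0, 1.0))
```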
3635 Heavy Metals Estimation in Coastal Areas Using Remote Sensing, Field Sampling and Classical and Robust Statistic
Authors: Elena Castillo-López, Raúl Pereda, Julio Manuel de Luis, Rubén Pérez, Felipe Piña
Abstract:
Sediments are an important accumulation site for toxic contaminants within the aquatic environment. Bioassays are a powerful tool for the study of sediments in relation to their toxicity, but they can be expensive. This article presents a methodology to estimate the main property of intertidal sediments in coastal zones: heavy metal concentration. The study, which was developed in the Bay of Santander (Spain), applies classical and robust statistics to CASI-2 hyperspectral images to estimate heavy metal presence and ecotoxicity (TOC). Simultaneous fieldwork (radiometric and chemical sampling) allowed an appropriate atmospheric correction of the CASI-2 images.
Keywords: remote sensing, intertidal sediment, airborne sensors, heavy metals, ecotoxicity, robust statistic, estimation
3634 Fatigue Life Estimation of Spiral Welded Waterworks Pipelines
Authors: Suk Woo Hong, Chang Sung Seok, Jae Mean Koo
Abstract:
Welding is widely used in modern industry for joining structures. Waterworks pipes, however, being buried underground, are exposed to fatigue loads from traffic, earthquakes, and other sources. Moreover, residual stress exists in the weld zone as a result of the welding process, and it is well known that the fatigue life of welded structures is degraded by residual stress. For these reasons, cracks can occur in the weld zone of a pipeline. In this case, ground subsidence or a sinkhole can occur if soil and sand are washed away by fluid leaking from the crack in the water pipe. These problems can lead to property damage and endanger lives. Hence, the estimation of the fatigue characteristics of the water pipeline weld zone is needed. Therefore, in this study, to estimate the fatigue characteristics of a spiral welded waterworks pipe, ASTM standard specimens and curved plate specimens were taken from the spiral welded waterworks pipe and fatigue tests were performed. The S-N curves of each specimen were estimated, and then the fatigue life of the weldment curved plate specimen was predicted by theoretical and analytical methods. After that, weldment curved plate specimens were taken from the pipe and verification fatigue tests were performed. Finally, it was verified that the predicted S-N curve of the weldment curved plate specimen was in good agreement with the fatigue test data.
Keywords: spiral welded pipe, fatigue life prediction, endurance limit modifying factors, residual stress
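A hedged sketch of the kind of calculation implied by the keywords (endurance limit modifying factors and an S-N curve of Basquin form) is given below; all material properties and factor values are illustrative, not the measured properties of the tested pipe.

```python
import numpy as np

def modified_endurance_limit(se_prime, k_surface, k_size, k_load, k_misc=1.0):
    """Marin-type correction: the specimen endurance limit reduced by modifying factors
    (surface finish, size, load type, plus a catch-all, e.g. for weld residual stress)."""
    return se_prime * k_surface * k_size * k_load * k_misc

def basquin_sn(s_ut, s_e, n_lo=1e3, n_hi=1e6):
    """Fit S = a * N**b through (n_lo, 0.9*Sut) and (n_hi, Se) and return (a, b)."""
    s_lo = 0.9 * s_ut
    b = np.log10(s_lo / s_e) / np.log10(n_lo / n_hi)
    a = s_lo / n_lo ** b
    return a, b

def life_at_stress(a, b, s):
    """Cycles to failure predicted by the fitted S-N curve at stress amplitude s."""
    return (s / a) ** (1.0 / b)

# Illustrative numbers (MPa), not the measured properties of the spiral welded pipe
se = modified_endurance_limit(se_prime=240.0, k_surface=0.75, k_size=0.9, k_load=0.85, k_misc=0.8)
a, b = basquin_sn(s_ut=480.0, s_e=se)
print(se, life_at_stress(a, b, s=200.0))
```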
3633 Understanding the Classification of Rain Microstructure and Estimation of Z-R Relationship using a Micro Rain Radar in Tropical Region
Authors: Tomiwa, Akinyemi Clement
Abstract:
Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (reflectivity-rain rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, which measures rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rate, liquid water content, drop size distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.
Keywords: remote sensing, precipitation, drop size distribution, micro rain radar
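A minimal sketch of fitting the Z-R power law Z = aR^b on a log-log scale, together with a crude rain-rate threshold for the stratiform/convective split, is shown below; the synthetic data and the 10 mm/h threshold are assumptions for illustration only.

```python
import numpy as np

def fit_zr(z_dbz, rain_rate):
    """Fit Z = a * R**b by least squares on log10(Z) vs log10(R), with Z converted from dBZ."""
    z_lin = 10.0 ** (np.asarray(z_dbz) / 10.0)        # reflectivity in mm^6 m^-3
    b, log_a = np.polyfit(np.log10(rain_rate), np.log10(z_lin), 1)
    return 10.0 ** log_a, b

def rain_class(rain_rate, threshold=10.0):
    """Crude stratiform/convective split on rain rate (mm/h); the threshold is illustrative."""
    return np.where(np.asarray(rain_rate) < threshold, "stratiform", "convective")

# Synthetic MRR-like sample drawn around Marshall-Palmer (Z = 200 R^1.6) for illustration
rng = np.random.default_rng(0)
R = rng.uniform(0.5, 40.0, 300)
Z_dbz = 10 * np.log10(200.0 * R ** 1.6) + rng.normal(0, 1.0, 300)
print(fit_zr(Z_dbz, R), rain_class([2.0, 25.0]))
```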
3632 Factors Affecting Households' Decision to Allocate Credit for Livestock Production: Evidence from Ethiopia
Authors: Kaleb Shiferaw, Berhanu Geberemedhin, Dereje Legesse
Abstract:
Access to credit is often viewed as a key to transforming semi-subsistence smallholders into market-oriented producers. However, only a few studies have examined the factors that affect farmers' decisions to allocate credit to farm activities in general and livestock production in particular. A trivariate probit model with double selection is employed to identify factors that affect farmers' decisions to allocate credit to livestock production, using data collected from smallholder farmers in Ethiopia. After controlling for two sources of sample selection bias (taking credit for the production season and the decision to allocate credit to farm activities), land ownership and access to a livestock-centered extension service are found to have a significant (p<0.001) effect on farmers' decision to use credit for livestock production. The results showed that farmers with large landholdings and access to a livestock-centered extension service are more likely to utilize credit for livestock production. However, since the effect of land ownership squared is negative, the effect of land ownership lessens for those who own a large plot of land. The study highlights the fact that improving access to credit does not automatically translate into more productive households. Improving farmers' access to credit should therefore be accompanied by focused extension services.
Keywords: livestock production, credit access, credit allocation, household decision, double sample selection
3631 Features Reduction Using Bat Algorithm for Identification and Recognition of Parkinson Disease
Authors: P. Shrivastava, A. Shukla, K. Verma, S. Rungta
Abstract:
Parkinson's disease is a chronic neurological disorder that directly affects human gait. It leads to slowness of movement and causes muscle rigidity and tremors. Gait serves as a primary outcome measure for studies aiming at early recognition of the disease. Using gait features, this paper implements an efficient binary bat algorithm for early detection of Parkinson's disease by selecting the optimal features required to classify affected patients against others. Data from 166 people, both fit and affected, were collected, and optimal feature selection was performed using PSO and the bat algorithm. The reduced dataset was then classified using a neural network. The experiments indicate that the binary bat algorithm outperforms traditional PSO and genetic algorithms and gives a fairly good recognition rate even with the reduced dataset.
Keywords: parkinson, gait, feature selection, bat algorithm
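A compact sketch of binary-bat feature selection follows; it substitutes a nearest-centroid classifier for the neural network used in the paper, omits the loudness/pulse-rate bookkeeping of the full algorithm, and runs on a tiny synthetic dataset rather than the gait recordings.

```python
import numpy as np

def fitness(mask, X_tr, y_tr, X_te, y_te):
    """Accuracy of a nearest-centroid classifier on the selected features (a stand-in
    for the neural network classifier of the paper), minus a small sparsity penalty."""
    if mask.sum() == 0:
        return 0.0
    classes = np.unique(y_tr)
    cents = np.array([X_tr[y_tr == c][:, mask].mean(axis=0) for c in classes])
    d2 = ((X_te[:, mask][:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    acc = (classes[d2.argmin(axis=1)] == y_te).mean()
    return acc - 0.01 * mask.mean()

def binary_bat(X_tr, y_tr, X_te, y_te, n_bats=15, n_iter=40, fmax=2.0, seed=0):
    """Minimal binary bat algorithm: real-valued velocities mapped to bit-flip
    probabilities by a sigmoid transfer function."""
    rng = np.random.default_rng(seed)
    d = X_tr.shape[1]
    pos = rng.integers(0, 2, (n_bats, d)).astype(bool)
    vel = np.zeros((n_bats, d))
    fit = np.array([fitness(p, X_tr, y_tr, X_te, y_te) for p in pos])
    best, best_fit = pos[fit.argmax()].copy(), fit.max()
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = fmax * rng.random()                         # random pulse frequency
            vel[i] += (pos[i].astype(float) - best.astype(float)) * freq
            prob = 1.0 / (1.0 + np.exp(-vel[i]))               # sigmoid transfer function
            cand = rng.random(d) < prob
            f_cand = fitness(cand, X_tr, y_tr, X_te, y_te)
            if f_cand > fit[i]:
                pos[i], fit[i] = cand, f_cand
            if f_cand > best_fit:
                best, best_fit = cand.copy(), f_cand
    return best, best_fit

# Tiny synthetic gait-like dataset: 2 informative features out of 10
rng = np.random.default_rng(1)
X = rng.normal(size=(166, 10)); y = (X[:, 0] + X[:, 2] > 0).astype(int)
X[:, 0] += 0.5 * y; X[:, 2] += 0.5 * y
sel, score = binary_bat(X[:120], y[:120], X[120:], y[120:])
print(sel, score)
```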
3630 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground
Authors: Bhim Kumar Dahal
Abstract:
Transportation network development in developing countries is proceeding at a rapid pace. The majority of such networks are railways and expressways, which pass through diverse topography, landforms and geological conditions despite the avoidance principle applied during route selection. Construction of such networks demands many low to high embankments, which require improvement of the foundation soil. This paper focuses on the various advanced ground improvement techniques used to improve soft soil, on modelling approaches, and on their predictability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e. the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used in modelling the embankments, from simple one-dimensional to complex three-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and sometimes the predictions deviate by more than 60% from the monitored value despite the same level of sophistication being used. This deviation is found to be mainly due to the selection of the constitutive model, assumptions made during different stages, deviations in the selection of model parameters, and simplifications during physical modelling of the ground conditions. It can be reduced by using an optimization process, optimization tools, and sensitivity analysis of the model parameters, which will guide the selection of appropriate model parameters.
Keywords: cement, improvement, physical properties, strength
3629 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using dedicated measurement sensors as an alternative approach can be very expensive, as with LIDAR, or limited in operational range, as with ultrasonic sensors. Additionally, absolute positioning systems like GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of variation of the projected point as the process, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specially developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the quality of the images results in considerable noise, which reduces the performance of the first approach. On the other hand, using the projected feature position is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: altitude estimation, drone, image processing, trajectory planning
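A simplified sketch of the first approach (optical-flow divergence as the EKF measurement) is given below; the noise levels, the constant-velocity process model, and the synthetic descent are assumptions, not the paper's testbed data.

```python
import numpy as np

def ekf_landing(flow_meas, dt=0.05, q=0.5, r=0.05, h0=10.0, v0=-1.0):
    """EKF with state [height, vertical velocity]; the measurement is the optical-flow
    divergence w = v / h obtained from Lucas-Kanade tracking (first approach, simplified)."""
    x = np.array([h0, v0])
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])                 # constant-velocity kinematics
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    est = []
    for w in flow_meas:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the nonlinear measurement w = v / h
        w_pred = x[1] / x[0]
        H = np.array([[-x[1] / x[0]**2, 1.0 / x[0]]])     # measurement Jacobian
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (w - w_pred)).ravel()
        P = (np.eye(2) - K @ H) @ P
        est.append(x.copy())
    return np.array(est)

# Synthetic descent: h from 10 m at -1 m/s, noisy flow-divergence measurements
t = np.arange(0, 5, 0.05)
h_true, v_true = 10.0 - 1.0 * t, -1.0
flow = v_true / h_true + np.random.default_rng(0).normal(0, 0.02, t.size)
print(ekf_landing(flow)[-1])     # estimated [height, velocity] near the end of the descent
```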
3628 Habitat Use by Persian Gazelle (Gazella subgutturosa) in Bydoye Protected Area, Iran
Authors: S. Aghanajafizadeh, M. Poursina
Abstract:
We studied the selection of winter habitat by the Persian gazelle (Gazella subgutturosa) in Bydoyeh protected area. Habitat variables such as the number of plant species, vegetation percentage, distance to the nearest water source, and plant patches at used sites were compared with randomly selected non-used sites. The results showed that the most important factors influencing habitat selection were the number and vegetation percentage of Artemisia sieberi: the vegetation percentage of plants and the vegetation percentage and number of Artemisia sieberi were significantly higher at used sites compared with the control area.
Keywords: Persian gazelle, habitat use, Bydoyeh protected area, Kerman, Iran
3627 Designing a Cricket Team Selection Method Using Super-Efficient DEA and Semi Variance Approach
Authors: Arnab Adhikari, Adrija Majumdar, Gaurav Gupta, Arnab Bisi
Abstract:
Team formation plays an instrumental role in sports like cricket. The existing literature reveals that most works on player selection focus only on the players' efficiency and ignore consistency. This motivates us to design an improved player selection method based on both a player's efficiency and consistency. To measure the players' efficiency, we employ a modified data envelopment analysis (DEA) technique, namely the super-efficient DEA model. We design a modified consistency index based on a semi-variance approach. Here, we introduce a new parameter called the 'fitness index' for the consistency computation to assess a player's fitness level. Finally, we devise a single performance score using both the efficiency score and the consistency score with the help of a linear programming model. To test the robustness of our method, we perform a rigorous numerical analysis to determine the all-time best One Day International (ODI) cricket XI. Next, we conduct extensive comparative studies of the efficiency scores, consistency scores, and selected teams between the existing methods and the proposed method, and explain the rationale behind the improvement.
Keywords: decision support systems, sports, super-efficient data envelopment analysis, semi variance approach
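A toy sketch of the consistency side of the method is shown below; the semi-variance is computed on downside deviations only, while the exact 'fitness index' and the LP-based weighting of the paper are not reproduced and are replaced by simple placeholders.

```python
import numpy as np

def downside_semivariance(scores):
    """Semi-variance computed only from performances below the player's own mean."""
    scores = np.asarray(scores, float)
    below = scores[scores < scores.mean()]
    return ((below - scores.mean()) ** 2).mean() if below.size else 0.0

def consistency(scores):
    """A simple consistency index: 1 / (1 + downside semi-variance), so flatter
    performance histories score higher (a placeholder, not the paper's index)."""
    return 1.0 / (1.0 + downside_semivariance(scores))

def combined_score(efficiency, scores, w=0.5):
    """Convex combination of a (super-efficiency DEA) efficiency score and consistency;
    the paper obtains the weighting from a linear programming model instead."""
    return w * efficiency + (1 - w) * consistency(scores)

# An erratic high-efficiency player vs. a steadier one (hypothetical innings scores)
print(combined_score(1.12, [55, 60, 3, 80, 47]), combined_score(1.05, [50, 52, 49, 55, 51]))
```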
3626 Marker Assisted Selection of Rice Genotypes for Xa5 and Xa13 Bacterial Leaf Blight Resistance Genes
Authors: P. Sindhumole, K. Soumya, R. Renjimol
Abstract:
Rice (Oryza sativa L.) is the major staple food crop of the world. It is prone to a number of biotic and abiotic stresses, of which Bacterial Leaf Blight (BLB), caused by Xanthomonas oryzae pv. oryzae, is the most rampant. Management of this disease through chemicals or any other means is very difficult. The best way to control BLB is the development of host plant resistance. BLB resistance is not controlled by a single gene but involves a cluster of more than thirty reported genes. Among these, Xa5 and Xa13 are two important genes which can be diagnosed through marker-assisted selection using closely linked molecular markers. During 2014, the first phase of field screening using forty traditional rice genotypes was carried out and twenty resistant, symptomless genotypes were identified. Molecular characterization of these genotypes using the RM 122 SSR marker revealed the presence of the Xa5 gene in thirteen genotypes. Forty-two traditional rice genotypes were used for the second phase of field screening for BLB resistance. Among these, sixteen resistant genotypes were identified. These genotypes, along with two susceptible check genotypes, were subjected to marker-assisted selection for the Xa13 gene, using the linked STS marker RG-136. During this process, the presence of the Xa13 gene could be detected in ten resistant genotypes. In the future, these selected genotypes can be directly utilised as donors in marker-assisted breeding programmes for BLB resistance in rice.
Keywords: Oryza sativa, SSR, STS, marker, disease, breeding
3625 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses the Fisher scoring algorithm as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
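A small sketch of Fisher scoring is shown below for an exponential regression with log link, as a stand-in for the composite-distribution likelihood maximised in the paper; the data and model are illustrative only.

```python
import numpy as np

def fisher_scoring_exponential(X, y, n_iter=25):
    """Fisher scoring for an exponential regression with log link, mu = exp(X beta).
    For this model the expected information is simply X'X, so the update is
    beta <- beta + (X'X)^-1 X'(y/mu - 1)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    beta = np.zeros(X.shape[1])
    XtX_inv = np.linalg.inv(X.T @ X)
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        beta = beta + XtX_inv @ (X.T @ (y / mu - 1.0))
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.exponential(np.exp(1.0 + 0.5 * X[:, 1]))
print(fisher_scoring_exponential(X, y))     # should recover roughly [1.0, 0.5]
```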
3624 A Semiparametric Approach to Estimate the Mode of Continuous Multivariate Data
Authors: Tiee-Jian Wu, Chih-Yuan Hsu
Abstract:
Mode estimation is an important task because it has applications to data from a wide variety of sources. We propose a semi-parametric approach to estimate the mode of an unknown continuous multivariate density function. Our approach is based on a weighted average of a parametric density estimate using the Box-Cox transform and a non-parametric kernel density estimate. Our semi-parametric mode estimate improves on both the parametric and non-parametric mode estimates. Specifically, it solves the non-consistency problem of parametric mode estimates (at large sample sizes) and reduces the variability of non-parametric mode estimates (at small sample sizes). The performance of our method at practical sample sizes is demonstrated by simulation examples and two real examples from the fields of climatology and image recognition.
Keywords: Box-Cox transform, density estimation, mode seeking, semiparametric method
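A one-dimensional sketch of the weighted-average idea follows; the paper addresses the multivariate case and chooses the weight in a data-driven way, whereas here the weight is fixed at 0.5 purely for illustration.

```python
import numpy as np
from scipy import stats

def semiparametric_mode(x, weight=0.5, grid_size=2000):
    """1-D illustration: average a Box-Cox-normal parametric density and a kernel density
    estimate, then take the argmax over a grid (x must be positive for the Box-Cox step)."""
    x = np.asarray(x, float)
    z, lam = stats.boxcox(x)                       # transformed data and Box-Cox lambda
    mu, sigma = z.mean(), z.std(ddof=1)
    grid = np.linspace(x.min(), x.max(), grid_size)
    t = (grid ** lam - 1.0) / lam if lam != 0 else np.log(grid)
    parametric = stats.norm.pdf(t, mu, sigma) * grid ** (lam - 1.0)   # change of variables
    kde = stats.gaussian_kde(x)(grid)
    mix = weight * parametric + (1.0 - weight) * kde
    return grid[np.argmax(mix)]

rng = np.random.default_rng(0)
sample = rng.lognormal(mean=1.0, sigma=0.4, size=300)
print(semiparametric_mode(sample))   # near the lognormal mode exp(1 - 0.4**2) ~ 2.32
```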
3623 Short-Term Effects of an Open Monitoring Meditation on Cognitive Control and Information Processing
Authors: Sarah Ullrich, Juliane Rolle, Christian Beste, Nicole Wolff
Abstract:
Inhibition and cognitive flexibility are essential parts of executive functions in our daily lives, as they enable the avoidance of unwanted responses or allow us to switch selectively between mental processes to generate appropriate behavior. There is growing interest in improving inhibition and response selection through brief mindfulness-based meditations. Arguably, open-monitoring meditation (OMM) improves inhibitory and flexibility performance by optimizing cognitive control and information processing. Yet the underlying neurophysiological processes have been poorly studied. Using the Simon-Go/Nogo paradigm, the present work examined the effect of a single 15-minute smartphone-app-based OMM on inhibitory performance and response selection in meditation novices. We used both behavioral and neurophysiological measures (event-related potentials, ERPs) to investigate which subprocesses of response selection and inhibition are altered after OMM. The study was conducted in a randomized crossover design with N = 32 healthy adults, and we investigated Go and Nogo trials in the paradigm. The results show that as little as 15 minutes of OMM can improve response selection and inhibition at behavioral and neurophysiological levels. More specifically, OMM reduces the rate of false alarms, especially during Nogo trials, regardless of congruency. It appears that OMM optimizes conflict processing and response inhibition compared to no meditation, which is also reflected in the ERP N2 and P3 time windows. The results may be explained by the meta-control model, which argues in terms of a specific processing mode with increased flexibility and inclusive decision-making under OMM. Importantly, however, the effects of OMM were only evident when there was prior experience with the task. It is likely that OMM frees up cognitive resources, as the amplitudes of these ERPs decreased. OMM novices seem to make finer adjustments during conflict processing after familiarization with the task.
Keywords: EEG, inhibition, meditation, Simon Nogo
3622 Lipschitz Classifiers Ensembles: Usage for Classification of Target Events in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
This paper introduces an original method for guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers. The solution is obtained as a finite closed set of alternative hypotheses which contains the object of classification with a probability of not less than a specified value. Thus, the classification is represented by a set of hypothetical classes. In this case, the smaller the cardinality of the discrete set of hypothetical classes, the higher the classification accuracy. Experiments have shown that if the cardinality of the classifier ensemble is increased, then the cardinality of this set of hypothetical classes is reduced. The problem of guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers is relevant to the multichannel classification of target events in C-OTDR monitoring systems. Results of the practical use of the suggested approach for accuracy control in C-OTDR monitoring systems are presented.
Keywords: Lipschitz classifiers, confidence set, C-OTDR monitoring, classifiers accuracy, classifiers ensemble
3621 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis
Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen
Abstract:
Hepatitis is one of the most common and dangerous diseases that affects humankind, and it exposes millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for the diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system includes three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an 'indirect approach' is used for fuzzy system modeling, implementing the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. We first propose a Type-I fuzzy system, which achieved an accuracy of approximately 90.9%. In this system, the process of diagnosis faces vagueness and uncertainty in the final decision; thus, the imprecise knowledge was managed by using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy logic is able to diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than the Type-I system and indicates the higher capability of Type-II fuzzy systems for modeling uncertainty.
Keywords: hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection
3620 Different Sampling Schemes for Semi-Parametric Frailty Model
Authors: Nursel Koyuncu, Nihal Ata Tutkun
Abstract:
The frailty model is a survival model that takes unobserved heterogeneity into account when exploring the relationship between the survival of an individual and several covariates. In recent years, the proposed survival models have become more complex, and this feature causes convergence problems, especially in large data sets. Therefore, the selection of a sample from these big data sets is very important for the estimation of parameters. In the sampling literature, some authors have defined new sampling schemes to estimate the parameters correctly. To this end, we examine the effect of sampling design in the semi-parametric frailty model. We conducted a simulation study in R to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17,260 male civil servants aged 40-64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We select 1,000 samples from the population using the different sampling schemes and estimate the parameters. From the simulation study, we conclude that the ranked set sampling design performs better than simple random sampling for each scenario.
Keywords: frailty model, ranked set sampling, efficiency, simple random sampling
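The sketch below contrasts simple random sampling with ranked set sampling for estimating a mean under perfect ranking; the population is a synthetic stand-in for a survival-time variable, not the civil-servant data set used in the paper, and the study itself was carried out in R rather than Python.

```python
import numpy as np

def srs(population, n, rng):
    """Simple random sample without replacement."""
    return rng.choice(population, size=n, replace=False)

def rss(population, n_cycles, set_size, rng):
    """Ranked set sampling: in each cycle draw `set_size` sets of `set_size` units,
    rank each set (here by the variable itself, i.e. perfect ranking) and keep the
    i-th ranked unit from the i-th set."""
    sample = []
    for _ in range(n_cycles):
        for i in range(set_size):
            units = rng.choice(population, size=set_size, replace=False)
            sample.append(np.sort(units)[i])
    return np.array(sample)

rng = np.random.default_rng(0)
pop = rng.exponential(scale=5.0, size=17260)          # synthetic stand-in population
reps = 1000
srs_means = [srs(pop, 12, rng).mean() for _ in range(reps)]
rss_means = [rss(pop, n_cycles=4, set_size=3, rng=rng).mean() for _ in range(reps)]
print(np.var(srs_means), np.var(rss_means))           # RSS mean typically has smaller variance
```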
3619 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia
Authors: Suzana Ramli, Wardah Tahir
Abstract:
Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to stream flow calculation and later predicts flood occurrences. GIS (Geographic Information System) is an advanced and appropriate tool for simulating hydrological models due to its realistic representation of topography. The paper discusses the calculation of surface runoff depth for two selected events by using GIS with the Curve Number method for the Upper Klang River basin. GIS enables the intersection of soil type and land use maps, which then produces a curve number map. The results show good correlation between simulated and observed values, with R² greater than 0.7. Acceptable performance on the statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also reported in the paper.
Keywords: surface runoff, geographic information system, curve number method, environment
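The Curve Number step itself is a short calculation; the sketch below applies the standard SCS-CN equations to an assumed storm depth and CN value (illustrative numbers, not the Upper Klang events).

```python
def scs_runoff_depth(p_mm, cn, ia_ratio=0.2):
    """SCS Curve Number runoff depth (mm): S = 25400/CN - 254, Ia = ia_ratio * S,
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 90 mm storm over a cell whose soil/land-use intersection gives CN = 80
print(scs_runoff_depth(90.0, 80))    # about 42 mm of runoff depth
```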
3618 Use of Gaussian-Euclidean Hybrid Function Based Artificial Immune System for Breast Cancer Diagnosis
Authors: Cuneyt Yucelbas, Seral Ozsen, Sule Yucelbas, Gulay Tezel
Abstract:
Because only a small number of systems within artificial immune systems (AIS) can handle nonlinear problems, nonlinear AIS approaches need to be developed among the well-known solution techniques. The Gaussian function is usually used for similarity estimation in classification problems and pattern recognition. In this study, diagnosis of breast cancer, the second most widespread cancer in women, was performed with different distance calculation functions, namely Euclidean, Gaussian, and a Gaussian-Euclidean hybrid function, in the clonal selection model of classical AIS on the Wisconsin Breast Cancer Dataset (WBCD), taken from the University of California, Irvine Machine Learning Repository. We used the 3-fold cross-validation method to train and test the dataset. According to the results, the maximum test classification accuracy was 97.35%, obtained by using the Gaussian-Euclidean hybrid function for fold 3. The mean test classification accuracies for all functions were 94.78%, 94.45% and 95.31% with the use of the Euclidean, Gaussian and Gaussian-Euclidean functions, respectively. With these results, the Gaussian-Euclidean hybrid function appears to be a promising distance calculation method, and it may be considered an alternative for hard nonlinear classification problems.
Keywords: artificial immune system, breast cancer diagnosis, Euclidean function, Gaussian function
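The sketch below gives one plausible form of the three distance functions and a nearest-memory-cell decision rule; the exact hybrid formula and the trained memory cells of the paper are not reproduced, and the toy values are hypothetical.

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(((a - b) ** 2).sum())

def gaussian_similarity(a, b, sigma=1.0):
    """Gaussian kernel turned into a distance-like quantity (1 - similarity)."""
    return 1.0 - np.exp(-((a - b) ** 2).sum() / (2.0 * sigma ** 2))

def gaussian_euclidean_hybrid(a, b, sigma=1.0, w=0.5):
    """A simple hybrid: weighted combination of the two measures above. The exact
    hybrid used in the paper may differ; this is only a plausible form."""
    return w * euclidean(a, b) + (1.0 - w) * gaussian_similarity(a, b, sigma)

def classify(sample, memory_cells, labels, dist=gaussian_euclidean_hybrid):
    """Assign the label of the closest memory cell produced by clonal selection."""
    d = [dist(sample, m) for m in memory_cells]
    return labels[int(np.argmin(d))]

# Toy memory cells standing in for an AIS trained on WBCD-like (scaled) features
memory = np.array([[0.2, 0.1], [0.8, 0.9]]); labels = ["benign", "malignant"]
print(classify(np.array([0.75, 0.85]), memory, labels))
```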
3617 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks
Authors: Chad Brown
Abstract:
This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with sample size.
Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes
3616 Optimized Cluster Head Selection Algorithm Based on LEACH Protocol for Wireless Sensor Networks
Authors: Wided Abidi, Tahar Ezzedine
Abstract:
Low-Energy Adaptive Clustering Hierarchy (LEACH) is considered one of the effective hierarchical routing algorithms that optimize energy and prolong the lifetime of the network. Since the selection of the Cluster Head (CH) in LEACH is carried out randomly, in this paper we propose an approach for electing the CH based on the LEACH protocol. In other words, we present a formula for calculating the threshold responsible for CH election. We adopt three principal criteria: the remaining energy of the node, the number of neighbors within cluster range, and the distance between the node and the CH. Simulation results show that our proposed approach outperforms the LEACH protocol in terms of prolonging the network lifetime and saving residual energy.
Keywords: wireless sensors networks, LEACH protocol, cluster head election, energy efficiency
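The sketch below shows the classical LEACH threshold next to one plausible way of weighting it by the three criteria named above; the specific weighting formula of the paper is not reproduced, and all numeric inputs are illustrative.

```python
def leach_threshold(p, round_no):
    """Classical LEACH threshold T(n) = p / (1 - p * (r mod 1/p)) for nodes that
    have not yet served as cluster head in the current epoch."""
    return p / (1.0 - p * (round_no % int(1.0 / p)))

def modified_threshold(p, round_no, e_res, e_init, n_neighbors, n_max, d_to_ch, d_max):
    """Threshold weighted by residual energy, neighbour count within cluster range and
    distance to the cluster head, in the spirit of the paper; the authors' exact
    weighting formula is not reproduced here."""
    energy_f = e_res / e_init
    neighbor_f = n_neighbors / n_max
    distance_f = 1.0 - d_to_ch / d_max
    return leach_threshold(p, round_no) * (energy_f + neighbor_f + distance_f) / 3.0

# Compare the classical and the criteria-weighted thresholds for an example node
print(leach_threshold(0.05, 7),
      modified_threshold(0.05, 7, e_res=0.7, e_init=1.0, n_neighbors=9, n_max=10,
                         d_to_ch=30.0, d_max=100.0))
```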
3615 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression and use an EM algorithm to estimate the parameters. Two special penalties are added to provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data. Censored data refers to data for which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in the field of criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within a city where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan with an understanding of the discovered day-of-week clusters or by supporting the deployment of extra resources.
Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
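A stripped-down sketch of the EM idea (without the smoothness penalties of the paper) is given below, on synthetic interval-censored events over the 168 hours of the week; the data and parameters are illustrative assumptions.

```python
import numpy as np

def em_intensity(intervals, n_weeks, n_bins=168, n_iter=200):
    """EM for time-of-week intensities from interval-censored events. Each event is
    known only to lie in [start, end) (hours of the week). E-step: spread the event
    over its interval in proportion to the current rates; M-step: rate = expected
    count / exposure. The smoothness penalties of the paper are omitted here."""
    rates = np.full(n_bins, 1.0 / n_bins)
    for _ in range(n_iter):
        expected = np.zeros(n_bins)
        for start, end in intervals:
            idx = np.arange(start, end) % n_bins
            w = rates[idx]
            expected[idx] += w / w.sum() if w.sum() > 0 else 1.0 / idx.size
        rates = expected / n_weeks                 # events per hour-of-week per week
    return rates

# Synthetic burglaries: a true hot spot on Friday evening, reported with 4-12 h uncertainty
rng = np.random.default_rng(0)
true_hours = rng.normal(loc=5 * 24 + 20, scale=3, size=400).astype(int) % 168
intervals = [(h - rng.integers(2, 6), h + rng.integers(2, 7)) for h in true_hours]
rates = em_intensity(intervals, n_weeks=52)
print(int(np.argmax(rates)))     # should land near hour 140 (Friday ~20:00)
```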