Search results for: radial distribution functions
6549 Bird-Adapted Filter for Avian Species and Individual Identification Systems Improvement
Authors: Ladislav Ptacek, Jan Vanek, Jan Eisner, Alexandra Pruchova, Pavel Linhart, Ludek Muller, Dana Jirotkova
Abstract:
One of the essential steps of avian song processing is signal filtering. Currently, the standard methods of filtering are the Mel Bank Filter or a linear filter distribution. In this article, a new type of filter bank called the Bird-Adapted Filter is introduced, whereby the signal filtering is modifiable based upon a new mathematical description of audiograms for a particular bird species or order, which was named the Avian Audiogram Unified Equation. According to the method, filters may be deliberately distributed by frequency: the filters are more concentrated in bands of higher sensitivity, where more information is expected to be transmitted, and vice versa. Further, a comparison of various filters for automatic individual recognition of the chiffchaff (Phylloscopus collybita) is demonstrated. The average Equal Error Rate (EER) was 16.23% for the linear filter bank, 18.71% for the Mel Bank Filter, 14.29% for the Bird-Adapted Filter, and 12.95% for the Bird-Adapted Filter with 1/3 modification. This approach would be useful in automatic systems for avian species and individual identification. Since Bird-Adapted Filter filtering is based on the measured audiograms of particular species or orders, selecting the distribution according to the avian vocalization provides the most precise filter distribution to date.
Keywords: avian audiogram, bird individual identification, bird song processing, bird species recognition, filter bank
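To illustrate the distribution principle, here is a minimal Python sketch assuming a bell-shaped hearing-sensitivity curve as a stand-in for the Avian Audiogram Unified Equation (which the abstract does not give): band edges are placed at uniform quantiles of the normalized sensitivity, so filters crowd into the most sensitive bands.

```python
import numpy as np

# Hypothetical stand-in for the Avian Audiogram Unified Equation: a
# bell-shaped sensitivity curve in log-frequency (NOT the paper's model).
def sensitivity(f_hz, f_best=3500.0, width=0.5):
    return np.exp(-0.5 * ((np.log10(f_hz) - np.log10(f_best)) / width) ** 2)

def band_edges(f_lo, f_hi, n_filters, n_grid=4096):
    """Invert the CDF of the normalized sensitivity at uniform quantiles,
    so that filter bands concentrate where hearing sensitivity is high."""
    f = np.linspace(f_lo, f_hi, n_grid)
    cdf = np.cumsum(sensitivity(f))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])       # scale to 0..1
    q = np.linspace(0.0, 1.0, n_filters + 2)        # edges incl. endpoints
    return np.interp(q, cdf, f)

edges = band_edges(100.0, 10000.0, n_filters=20)
print(np.round(edges, 1))   # spacing is densest near the sensitivity peak
```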
Procedia PDF Downloads 388
6548 The Size Effects of Keyboards (Keycaps) on Computer Typing Tasks
Authors: Chih-Chun Lai, Jun-Yu Wang
Abstract:
The keyboard is the most important equipment for computer tasks. However, improper keyboard design can cause symptoms such as ulnar and/or radial deviation. The research goal of this study was to investigate the optimal size(s) of keycaps to increase typing efficiency. As shown in the questionnaire pre-study with 49 participants aged from 20 to 44, the most commonly used keyboards were 101-key standard keyboards. Most of the keycap sizes (W × L) were 1.3 × 1.5 cm and 1.5 × 1.5 cm. The fingertip breadths of most participants were 1.2 cm. Therefore, in the main study with 18 participants, a standard keyboard with each of the three keycap sizes (1.2 × 1.4 cm, 1.3 × 1.5 cm, and 1.5 × 1.5 cm) was used to investigate typing efficiency. The results revealed that the difference in operating times between the 1.3 × 1.5 cm and 1.2 × 1.4 cm keycaps was insignificant, while the operating times for the 1.5 × 1.5 cm keycaps were significantly longer than for the 1.2 × 1.4 cm or 1.3 × 1.5 cm keycaps. As for the typing error rate, there was no significant difference.
Keywords: keyboard, keycap size, typing efficiency, computer tasks
Procedia PDF Downloads 383
6547 An Application of Modified M-out-of-N Bootstrap Method to Heavy-Tailed Distributions
Authors: Hannah F. Opayinka, Adedayo A. Adepoju
Abstract:
This study is an extension of a prior study on the modification of the existing m-out-of-n (moon) bootstrap method for heavy-tailed distributions, in which the modified m-out-of-n (mmoon) bootstrap was proposed as an alternative to the existing moon technique. In this study, both the moon and mmoon techniques were applied to two real income datasets, which followed Lognormal and Pareto distributions (both with finite variance), respectively. The performances of the two techniques were compared using the Standard Error (SE) and Root Mean Square Error (RMSE). The findings showed that mmoon outperformed the moon bootstrap in terms of smaller SEs and RMSEs for all the sample sizes considered in the two datasets.
Keywords: bootstrap, income data, lognormal distribution, Pareto distribution
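For reference, a minimal sketch of the baseline moon bootstrap that the paper modifies: resample m < n observations with replacement and take the spread of the resampled statistic. The choice m = n^0.8 and the Pareto parameters are illustrative assumptions; the mmoon modification itself is not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def moon_bootstrap_se(x, m, n_boot=2000, stat=np.mean):
    """m-out-of-n ('moon') bootstrap standard error of a statistic:
    resample m < n observations with replacement."""
    x = np.asarray(x)
    stats = np.array([stat(rng.choice(x, size=m, replace=True))
                      for _ in range(n_boot)])
    return stats.std(ddof=1)

# Heavy-tailed stand-in data: Pareto with finite variance (shape > 2).
income = (rng.pareto(a=2.5, size=500) + 1.0) * 10_000
print(moon_bootstrap_se(income, m=int(500 ** 0.8)))   # assumed m = n^0.8
```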
Procedia PDF Downloads 186
6546 A Nonlocal Means Algorithm for Poisson Denoising Based on Information Geometry
Authors: Dongxu Chen, Yipeng Li
Abstract:
This paper presents an information-geometric Nonlocal Means (NLM) algorithm for Poisson denoising. NLM estimates a noise-free pixel as a weighted average of image pixels, where each pixel is weighted according to the similarity between image patches in Euclidean space. In this work, every pixel is modeled as a Poisson distribution locally estimated by Maximum Likelihood (ML), and all of these distributions constitute a statistical manifold. NLM denoising is then conducted on the statistical manifold, where the Fisher information matrix is used to compute geodesic distances between distributions, which serve as the similarity between patches. This approach was demonstrated to be competitive with related state-of-the-art methods.
Keywords: image denoising, Poisson noise, information geometry, nonlocal-means
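To make the geodesic step concrete: for the one-parameter Poisson family, the Fisher metric 1/λ gives the closed-form geodesic distance 2|√λ₁ − √λ₂|. Below is a sketch of an NLM patch weight built on it, assuming for simplicity that pixel values stand in for the local ML rate estimates.

```python
import numpy as np

def fisher_rao_poisson(lam1, lam2):
    """Fisher-Rao geodesic distance between Poisson(lam1) and Poisson(lam2);
    the Poisson family's Fisher metric is 1/lam, giving this closed form."""
    return 2.0 * np.abs(np.sqrt(lam1) - np.sqrt(lam2))

def nlm_weight(patch_a, patch_b, h):
    """NLM patch weight from summed squared per-pixel geodesic distances."""
    d2 = np.sum(fisher_rao_poisson(patch_a, patch_b) ** 2)
    return np.exp(-d2 / (h * h))

rng = np.random.default_rng(0)
clean = np.full((7, 7), 40.0)
p1, p2 = rng.poisson(clean), rng.poisson(clean)        # two noisy patches
print(nlm_weight(p1, p2, h=10.0))                      # high: same content
print(nlm_weight(p1, rng.poisson(clean * 4), h=10.0))  # low: different content
```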
Procedia PDF Downloads 285
6545 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles
Authors: Nozar Kishi, Babak Kamrani, Filmon Habte
Abstract:
Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures constituted of four modular segments: 1) stochastic event sets that represent the statistics of past events; 2) hazard attenuation functions that model the local intensity; 3) vulnerability functions that address the repair need of local buildings exposed to the hazard; and 4) a financial module addressing policy conditions that estimates the resulting losses. The events module is comprised of events (faults or tracks) of different intensities with corresponding probabilities, based on the same statistics as observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to the repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a superstratified sampling approach that is based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood, as well as concrete confined in steel, SRC (Steel-Reinforced Concrete), and high-rise.
Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM
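A toy sketch of how such a modular event set feeds the financial module: average annual loss and loss-exceedance probabilities from (rate, loss) pairs under a Poisson-occurrence assumption. The numbers are illustrative, not the paper's Japan event set.

```python
import numpy as np

# Toy stochastic event set as (annual rate, loss) pairs; illustrative only.
events = np.array([
    # rate [1/yr], loss [billions]
    [0.100,   5.0],
    [0.010,  80.0],
    [0.004, 250.0],
])
rates, losses = events[:, 0], events[:, 1]

# Average Annual Loss: event losses weighted by occurrence rates.
aal = np.sum(rates * losses)

# Annual exceedance probability of each loss level, assuming Poisson
# event occurrence: P(loss >= L) = 1 - exp(-(sum of rates with loss >= L)).
order = np.argsort(losses)[::-1]
exceed_prob = 1.0 - np.exp(-np.cumsum(rates[order]))
for L, p in zip(losses[order], exceed_prob):
    print(f"loss >= {L:6.1f}: annual prob {p:.4f}")
print("AAL:", aal)
```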
Procedia PDF Downloads 271
6544 Fairly Irrigation Water Distribution between Upstream and Downstream Water Users in Water Shortage Periods
Authors: S. M. Hashemy Shahdany
Abstract:
Equitable water delivery has become one of the main concerns for water authorities in arid regions. Due to water scarcity, providing a reliable amount of water is not possible for most of the irrigation districts in arid regions. In this paper, water level difference control is applied to keep the water level errors equal in adjacent reaches. Distant downstream decentralized configurations of the control method are designed and tested under a realistic scenario representing canal operation under water shortage. The simulation results show that the difference controllers share the water level error among all of the users in a fair way. Therefore, the water deficit has a similar influence on downstream as well as upstream water offtakes.
Keywords: equitable water distribution, precise agriculture, sustainable agriculture, water shortage
Procedia PDF Downloads 466
6543 Optimal Planning of Dispatchable Distributed Generators for Power Loss Reduction in Unbalanced Distribution Networks
Authors: Mahmoud M. Othman, Y. G. Hegazy, A. Y. Abdelaziz
Abstract:
This paper proposes a novel heuristic algorithm that aims to determine the best size and location of distributed generators in unbalanced distribution networks. The proposed heuristic algorithm can deal with planning cases where power loss is to be optimized without violating the system's practical constraints. The distributed generation units in the proposed algorithm are modeled as voltage-controlled nodes with the flexibility to be converted to constant-power-factor nodes in case of reactive power limit violation. The proposed algorithm is implemented in MATLAB and tested on the IEEE 37-node feeder. The results obtained show the effectiveness of the proposed algorithm.
Keywords: distributed generation, heuristic approach, optimization, planning
Procedia PDF Downloads 526
6542 On the Grid Technique by Approximating the Derivatives of the Solution of the Dirichlet Problems for (1+1) Dimensional Linear Schrodinger Equation
Authors: Lawrence A. Farinola
Abstract:
Four-point implicit schemes for the approximation of the first and pure second order derivatives of the solution of the Dirichlet problem for the one-dimensional Schrödinger equation with respect to the time variable t were constructed. Also, special four-point implicit difference boundary value problems are proposed for the first and pure second derivatives of the solution with respect to the spatial variable x. The grid method is also applied to the mixed second derivative of the solution of the linear time-dependent Schrödinger equation. It is assumed that the initial function belongs to the Hölder space C⁸⁺ᵃ, 0 < α < 1, the Schrödinger wave function given in the Schrödinger equation is from the Hölder space Cₓ,ₜ⁶⁺ᵃ, ³⁺ᵃ/², the boundary functions are from C⁴⁺ᵃ, and between the initial and the boundary functions the conjugation conditions of orders q = 0, 1, 2, 3, 4 are satisfied. It is proven that the solution of the proposed difference schemes converges uniformly on the grids at the rate O(h² + k), where h is the step size in x and k is the step size in time. Numerical experiments are presented to support the analysis made.
Keywords: approximation of derivatives, finite difference method, Schrödinger equation, uniform error
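For orientation, a generic implicit scheme with the same O(h² + k) accuracy for the free 1D Schrödinger equation with Dirichlet boundaries (backward Euler in time, central differences in space); this is a sketch, not the paper's four-point schemes for the derivatives.

```python
import numpy as np

# Generic O(h^2 + k) implicit scheme for i u_t = -u_xx with u = 0 on the
# boundary: backward Euler in time, central second differences in space.
L, T, nx, nt = 1.0, 0.1, 200, 400
h, k = L / nx, T / nt
x = np.linspace(0.0, L, nx + 1)

m = nx - 1                                   # interior nodes
D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
      + np.diag(np.ones(m - 1), -1)) / h**2

A = np.eye(m) - 1j * k * D2                  # (I - i k D2) u^{n+1} = u^n
u = np.exp(-100.0 * (x[1:-1] - 0.5) ** 2).astype(complex)   # wave packet

for _ in range(nt):
    u = np.linalg.solve(A, u)
print("max |u| at t=T:", np.abs(u).max())
```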
Procedia PDF Downloads 122
6541 A Proposal for an Excessivist Social Welfare Ordering
Authors: V. De Sandi
Abstract:
In this paper, we characterize a class of rank-weighted social welfare orderings that we call "Excessivist." The Excessivist Social Welfare Ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. To accomplish this, the identification of a richness or affluence line is necessary; we employ a fixed, exogenous line of excess. We define an eSWF in the form of a weighted sum of individuals' incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the class of rank-weighted social welfare functions: in our excessivist social welfare ordering, we allow the weights to be both positive (for individuals below the line) and negative (for individuals above). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC) and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not lead to changes in their ordering. AER says that if two distributions are identical in every respect but for one individual above the threshold, who is richer in the first, then the second should be preferred by society. This means that we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, whatever their positions relative to the threshold, without reversing their ranks, leads to an improved distribution if the number of individuals below the threshold is the same after the transfer or has increased. SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need to be justified; we support them through the notion of comparative egalitarianism and income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we can characterize the class of eSWOs, obtaining the following result through a proof by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies the axioms of continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold, if and only if it is an Excessivist social welfare ordering. A discussion about the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What the commonly implemented social welfare functions have been overlooking is the concern for extreme richness at the top. The characterization of the Excessivist Social Welfare Ordering, given the axioms above, aims to fill this gap.
Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering
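A toy numerical sketch of the ordering's defining feature: positive rank weights below the excess line θ and a negative weight on income above it. The weight vectors here are illustrative assumptions, not the paper's axiomatically derived ones.

```python
import numpy as np

def excessivist_welfare(incomes, theta, w_above=-1.0):
    """Rank-weighted welfare: positive, rank-decreasing weights below the
    excess line theta; a negative weight on income in excess of theta."""
    y = np.sort(np.asarray(incomes, dtype=float))
    below, above = y[y <= theta], y[y > theta]
    k = len(below)
    w_below = (k - np.arange(k)) / k if k else np.array([])  # worst-off first
    return np.dot(w_below, below) + w_above * np.sum(above - theta)

theta = 100.0
a = [10, 20, 30, 500]    # one individual far above the excess line
b = [10, 20, 30, 200]
# AER in action: shrinking the excess income improves the ordering.
print(excessivist_welfare(a, theta) < excessivist_welfare(b, theta))  # True
```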
Procedia PDF Downloads 64
6540 Reductive Control in the Management of Redundant Actuation
Authors: Mkhinini Maher, Knani Jilani
Abstract:
We present in this work the performance of a mobile omnidirectional robot by evaluating its management of the redundancy of actuation, and thus come to the predictive control that is implemented. The distribution of the wrench over the robot's actuators, through the Moore-Penrose pseudo-inverse, corresponds to a geometric distribution of efforts. We show that the load on the vehicle wheels is not equi-distributed, depending on the wheel configuration and the robot movement. Thus, the threshold of sliding is not the same for the three wheels of the vehicle. We suggest exploiting the redundancy of actuation to reduce the risk of wheel sliding and to thereby improve the accuracy of displacement. This kind of approach has previously been the subject of study for legged robots.
Keywords: mobile robot, actuation, redundancy, omnidirectional, Moore-Penrose pseudo-inverse, reductive control
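A minimal sketch of the pseudo-inverse allocation step, on an assumed four-omniwheel geometry (four wheels are used here so that the actuation is actually redundant; the paper's platform differs): pinv returns the minimum-norm wheel efforts realizing a desired body wrench.

```python
import numpy as np

# Map a desired body wrench (Fx, Fy, Mz) to wheel traction efforts.
# Assumed geometry: four omniwheels at 45/135/225/315 degrees on a
# radius R, so 4 actuators share 3 degrees of freedom (redundant).
R = 0.2
angles = np.deg2rad([45.0, 135.0, 225.0, 315.0])

# Column j maps wheel j's traction force to the body wrench.
A = np.vstack([-np.sin(angles),      # contribution to Fx
                np.cos(angles),      # contribution to Fy
                np.full(4, R)])      # contribution to Mz

wrench = np.array([5.0, 0.0, 0.5])   # desired (Fx [N], Fy [N], Mz [N·m])
f = np.linalg.pinv(A) @ wrench       # minimum-norm effort distribution
print("wheel efforts:", np.round(f, 3))
print("residual:", np.round(A @ f - wrench, 12))
```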
Procedia PDF Downloads 513
6539 The Use of the Matlab Software as the Best Way to Recognize Penumbra Region in Radiotherapy
Authors: Alireza Shayegan, Morteza Amirabadi
Abstract:
The γ tool was developed to quantitatively compare dose distributions, either measured or calculated. Before computing γ, the dose and distance scales of the two distributions, referred to as evaluated and reference, are re-normalized by dose and distance criteria, respectively. The re-normalization allows the dose distribution comparison to be conducted simultaneously along the dose and distance axes. Several two-dimensional images were acquired using a Scanning Liquid Ionization Chamber EPID and Extended Dose Range (EDR2) films for regular and irregular radiation fields. The raw images were then converted into two-dimensional dose maps. Translational and rotational manipulations were performed on the images using Matlab software. These evaluated dose distribution maps were then compared with the corresponding original dose maps, taken as the reference dose maps.
Keywords: energetic electron, gamma function, penumbra, Matlab software
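A one-dimensional sketch of the γ evaluation (done in Matlab in the paper; Python here): for each reference point, γ is the minimum combined dose-difference / distance-to-agreement norm over the evaluated distribution, assuming the common 3%/3 mm criteria and global normalization.

```python
import numpy as np

def gamma_index(ref, eva, coords, dose_tol=0.03, dist_tol=3.0):
    """1D gamma evaluation: per reference point, the minimum over evaluated
    points of sqrt((dose diff / dose_tol)^2 + (distance / dist_tol)^2),
    with a 3% global dose criterion and 3 mm distance-to-agreement."""
    ref_max = ref.max()
    g = np.empty_like(ref)
    for i, (d_r, x_r) in enumerate(zip(ref, coords)):
        dd = (eva - d_r) / (dose_tol * ref_max)
        dx = (coords - x_r) / dist_tol
        g[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return g

x = np.arange(0.0, 50.0, 1.0)                      # positions [mm]
ref = 100.0 / (1.0 + np.exp((x - 25.0) / 2.0))     # penumbra-like profile
eva = 100.0 / (1.0 + np.exp((x - 25.5) / 2.0))     # 0.5 mm shifted copy
print("pass rate:", np.mean(gamma_index(ref, eva, x) <= 1.0))
```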
Procedia PDF Downloads 301
6538 Sociolinguistic and Classroom Functions of Using Code-Switching in CLIL Context
Authors: Khatuna Buskivadze
Abstract:
The aim of the present study is to investigate the sociolinguistic and classroom functions and the frequency of the teacher's Code-Switching (CS) in the Content and Language Integrated Learning (CLIL) lesson. Nowadays, Georgian society struggles to become part of the European world, and the English language itself plays a role in forming new generations with European values. Based on our research conducted in 2019, out of all 114 private schools in Tbilisi, full CLIL programs are taught in 7 schools, while only some subjects are taught using CLIL in 3 schools. The goal of the former research was to define the features of the CLIL methodology within the process of teaching English, on the example of Georgian private high schools. Taking the Georgian reality and cultural features into account, a modified version of the questionnaire based on the classification of CS use in the ESL classroom proposed by Ferguson (2009) was used. The qualitative research revealed students' and the teacher's attitudes towards the teacher's code-switching in the CLIL lesson. Both qualitative and quantitative research were conducted: observations of the teacher's lessons (recordings of the teacher's online lessons), an interview, and a questionnaire among the Math teacher's 20 high school students. We came to several conclusions, some of which are given here: the Math teacher's CS behavior mostly serves (1) the conversational function of interjection and (2) the classroom functions of introducing unfamiliar materials and topics, explaining difficult concepts, and maintaining classroom discipline and the structure of the lesson. The teacher and 13 students have negative attitudes towards using only Georgian in teaching Math; the higher the level of English, the more negative the attitude towards using Georgian in the classroom. Although all the students were Georgian, their competence in English is higher than in Georgian; therefore, they consider English an inseparable part of their identities. The overall results of the case study of teaching Math (educational discourse) in one of the private schools in Tbilisi will be presented at the conference.
Keywords: attitudes, bilingualism, code-switching, CLIL, conversation analysis, interactional sociolinguistics
Procedia PDF Downloads 162
6537 Statistical Analysis for Overdispersed Medical Count Data
Authors: Y. N. Phang, E. F. Loh
Abstract:
Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models in modeling over-dispersed medical count data with extra variation caused by extra zeros and unobserved heterogeneity. The studies indicate that ZIP and ZINB always provide a better fit than the standard Poisson and negative binomial models in modeling over-dispersed medical count data. In this study, we propose the use of Zero-Inflated Inverse Trinomial (ZIIT), Zero-Inflated Poisson Inverse Gaussian (ZIPIG) and Zero-Inflated Strict Arcsine (ZISA) models in modeling over-dispersed medical count data. These proposed models are not widely used by researchers, especially in the medical field. The results show that the three suggested models can serve as alternatives in modeling over-dispersed medical count data, which is supported by their application to a real-life medical dataset. The inverse trinomial, Poisson inverse Gaussian, and strict arcsine distributions are discrete distributions with a cubic variance function of the mean; therefore, ZIIT, ZIPIG and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling over-dispersed medical count data when ZIP and ZINB are inadequate.
Keywords: zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson's goodness of fit
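As a sketch of the baseline comparison, here is ZIP and ZINB fitting with statsmodels on simulated zero-inflated counts; the proposed ZIIT/ZIPIG/ZISA distributions have no off-the-shelf implementation, and the data below are simulated, not the paper's medical dataset.

```python
import numpy as np
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

# Simulated counts with excess zeros, standing in for the medical data.
rng = np.random.default_rng(1)
n = 1000
y = rng.poisson(3.0, size=n)
y[rng.random(n) < 0.35] = 0                  # structural zeros

X = np.ones((n, 1))                          # intercept-only model
zip_res = ZeroInflatedPoisson(y, X).fit(disp=False, maxiter=200)
zinb_res = ZeroInflatedNegativeBinomialP(y, X).fit(disp=False, maxiter=200)

# The usual model-selection step: compare information criteria.
print("ZIP  AIC:", zip_res.aic)
print("ZINB AIC:", zinb_res.aic)
```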
Procedia PDF Downloads 546
6536 Simultaneous Determination of Methotrexate and Aspirin Using Fourier Transform Convolution Emission Data under Non-Parametric Linear Regression Method
Authors: Marwa A. A. Ragab, Hadir M. Maher, Eman I. El-Kimary
Abstract:
Co-administration of methotrexate (MTX) and aspirin (ASP) can cause a pharmacokinetic interaction and a subsequent increase in blood MTX concentrations, which may increase the risk of MTX toxicity. Therefore, it is important to develop a sensitive, selective, accurate and precise method for their simultaneous determination in urine. A new hybrid chemometric method has been applied to the emission response data of the two drugs. A spectrofluorimetric method for the determination of MTX through measurement of its acid-degradation product, 4-amino-4-deoxy-10-methylpteroic acid (4-AMP), was developed. Moreover, the acid-catalyzed degradation reaction enables the spectrofluorimetric determination of ASP through the formation of its active metabolite salicylic acid (SA). The proposed chemometric method deals with convolution of the emission data using 8-point sin xi polynomials (discrete Fourier functions) after derivative treatment of these emission data. The first and second derivative curves (D1 and D2) were obtained first; then convolution of these curves was performed to obtain the first and second derivative under Fourier function curves (D1/FF) and (D2/FF). This new application was used for the resolution of the overlapped emission bands of the degradation products of both drugs, to allow their simultaneous indirect determination in human urine. Not only was this chemometric approach applied to the emission data, but the obtained data were also subjected to non-parametric linear regression analysis (Theil's method). The proposed method was fully validated according to the ICH guidelines, and it yielded linearity ranges of 0.05-0.75 and 0.5-2.5 µg mL-1 for MTX and ASP, respectively. It was found that the non-parametric method was superior to the parametric one in the simultaneous determination of MTX and ASP after the chemometric treatment of the emission spectra of their degradation products. The work combines the advantages of derivative and convolution using discrete Fourier functions together with the reliability and efficacy of the non-parametric analysis of data. The achieved sensitivity, along with the low values of LOD (0.01 and 0.06 µg mL-1) and LOQ (0.04 and 0.2 µg mL-1) for MTX and ASP respectively, by the second derivative under Fourier functions (D2/FF) were promising and guarantee its application for monitoring the two drugs in patients' urine samples.
Keywords: chemometrics, emission curves, derivative, convolution, Fourier transform, human urine, non-parametric regression, Theil's method
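A small sketch of the non-parametric calibration step: Theil's method takes the median of pairwise slopes, so one outlying emission reading barely moves the fitted line (toy numbers, not the paper's validation data).

```python
import numpy as np
from scipy.stats import theilslopes

# Toy calibration line: Theil's estimator is the median of pairwise
# slopes, making the fit robust to a single outlying reading.
conc = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75])   # µg/mL (MTX range)
signal = 120.0 * conc + 2.0                              # ideal response
signal[3] += 8.0                                         # one outlier

slope, intercept, lo, hi = theilslopes(signal, conc, alpha=0.95)
print(f"slope {slope:.2f} (95% CI {lo:.2f}..{hi:.2f}), intercept {intercept:.2f}")
```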
Procedia PDF Downloads 430
6535 Effectiveness of Self-Learning Module on the Academic Performance of Students in Statistics and Probability
Authors: Aneia Rajiel Busmente, Renato Gunio Jr., Jazin Mautante, Denise Joy Mendoza, Raymond Benedict Tagorio, Gabriel Uy, Natalie Quinn Valenzuela, Ma. Elayza Villa, Francine Yezha Vizcarra, Sofia Madelle Yapan, Eugene Kurt Yboa
Abstract:
COVID-19's rapid spread caused a dramatic change in the nation, especially in the educational system. The Department of Education was forced to adopt a practical learning platform without neglecting health: printed modular distance learning. The Philippines' K-12 curriculum includes Statistics and Probability as one of the key courses, as it offers students the knowledge to evaluate and comprehend data. Students, however, have difficulty with and lack understanding of the concepts of the Normal Distribution in Statistics and Probability, and the Self-Learning Module on the Normal Distribution created by the Department of Education has several problems, including too many activities, unclear illustrations, and insufficient examples of concepts, which make it difficult for learners to accomplish the module. The purpose of this study is to determine the effectiveness of a self-learning module on the academic performance of students in the subject Statistics and Probability; it will also explore students' perception of the quality of the created Self-Learning Module in Statistics and Probability. Despite the availability of Self-Learning Modules in Statistics and Probability in the Philippines, there is still little literature discussing their effectiveness in improving the performance of Senior High School students in Statistics and Probability. In this study, a Self-Learning Module on the Normal Distribution is evaluated using a quasi-experimental design. STEM students in Grade 11 from National University's Nazareth School will be the study's participants, chosen by purposive sampling. Google Forms will be utilized to find at least 100 STEM students in Grade 11. The research instrument consists of a 20-item pre- and post-test to assess participants' knowledge and performance regarding the Normal Distribution, and a Likert-scale survey to evaluate how the students perceived the self-learning module. The pre-test, post-test, and Likert-scale surveys will be utilized to gather data, with Jeffreys' Amazing Statistics Program (JASP) software being used for analysis.
Keywords: self-learning module, academic performance, statistics and probability, normal distribution
Procedia PDF Downloads 115
6534 Optimal Design of Propellant Grain Shape Based on Structural Strength Analysis
Authors: Chen Xiong, Tong Xin, Li Hao, Xu Jin-Sheng
Abstract:
Experimental and simulation research on the structural integrity of propellant grain in a solid rocket motor (SRM) with a high volumetric fraction was conducted. First, using SRM parametric modeling functions with ABAQUS's Python secondary development tools, three-dimensional parameterized modeling programs for star-shaped, wheel-shaped and wing-cylindrical grains were accomplished. Then, the mechanical properties under different loads for the star-shaped grain were obtained with the automatically established finite element model in ABAQUS. Next, several optimization algorithms were introduced to optimize the star-shaped, wheel-shaped and wing-cylindrical grains. After meeting the demands of burning surface changes and volumetric fraction, the optimum three-dimensional shapes of the grains were obtained. Finally, by means of the parametric modeling functions, pressure data from the SRM's cold pressurization test were directly applied to the simulation of the grain's mechanical performance. The results verify the reliability and practicality of the parameterized modeling program for SRMs.
Keywords: cold pressurization test, parametric modeling, structural integrity, propellant grain, SRM
Procedia PDF Downloads 362
6533 Optimal Design of Step-Stress Partially Life Test Using Multiply Censored Exponential Data with Random Removals
Authors: Showkat Ahmad Lone, Ahmadur Rahman, Ariful Islam
Abstract:
The major assumption in accelerated life tests (ALT) is that the mathematical model relating the lifetime of a test unit and the stress is known or can be assumed. In some cases, such life-stress relationships are not known and cannot be assumed, i.e. ALT data cannot be extrapolated to use conditions. In such cases, a partially accelerated life test (PALT) is a more suitable test to perform, in which tested units are subjected to both normal and accelerated conditions. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests using progressive failure-censored hybrid data with random removals. The life data of the units under test are considered to follow an exponential life distribution, and the removals from the test are assumed to have binomial distributions. Point and interval maximum likelihood estimates are obtained for the unknown distribution parameters and the tampering coefficient. An optimum test plan is developed using the D-optimality criterion. The performance of the resulting estimators of the developed model parameters is evaluated and investigated using a simulation algorithm.
Keywords: binomial distribution, D-optimality, multiple censoring, optimal design, partially accelerated life testing, simulation study
Procedia PDF Downloads 322
6532 Effectiveness of Computer-Based Cognitive Training in Improving Attention-Deficit/Hyperactivity Disorder Rehabilitation
Authors: Marjan Ghazisaeedi, Azadeh Bashiri
Abstract:
Background: Attention-Deficit/Hyperactivity Disorder (ADHD) is one of the most common psychiatric disorders in early childhood; in addition to its main symptoms, it produces significant deficits in the areas of educational, social and individual relationships. Considering the importance of rehabilitation in ADHD patients to control these problems, this study investigated the advantages of computer-based cognitive training in these patients. Methods: This review article was conducted by searching articles published since 2005 in scientific databases and e-journals, using keywords including computerized cognitive rehabilitation, computer-based training and ADHD. Results: Since drugs have short-term effects as well as many side effects in the rehabilitation of ADHD patients, using supplementary methods such as computer-based cognitive training is one of the best solutions. This approach provides quick feedback and has no side effects, so it yields promising results in the cognitive rehabilitation of ADHD, especially for working memory and attention. Conclusion: Considering the various cognitive dysfunctions in ADHD patients, the application of computerized cognitive training has the potential to improve cognitive functions and, consequently, social, academic and behavioral performance in patients with this disorder.
Keywords: ADHD, computer-based cognitive training, cognitive functions, rehabilitation
Procedia PDF Downloads 279
6531 Quantile Smoothing Splines: Application on Productivity of Enterprises
Authors: Semra Turkan
Abstract:
In this paper, we examine the factors that affect the productivity of Turkey's Top 500 Industrial Enterprises in 2014. The labor productivity of enterprises is taken as the indicator of the productivity of industrial enterprises. When the relationships between some financial ratios and labor productivity are examined, it is seen that there is a nonparametric relationship between labor productivity and return on sales. In addition, the distribution of the labor productivity of enterprises is right-skewed. If the dependent distribution is skewed, quantile regression is more suitable for such data. Hence, the nonparametric relationship between labor productivity and return on sales is modeled by quantile smoothing splines.
Keywords: quantile regression, smoothing spline, labor productivity, financial ratios
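A sketch of the modeling step, approximating a quantile smoothing spline by quantile regression on a B-spline basis; the data are simulated stand-ins for the enterprise dataset.

```python
import numpy as np
import patsy
import statsmodels.api as sm

# Toy right-skewed data standing in for labor productivity vs. return
# on sales in the Top 500 enterprise dataset.
rng = np.random.default_rng(2)
ros = rng.uniform(0.0, 0.3, 300)                            # return on sales
prod = 50 + 400 * np.sqrt(ros) + rng.gamma(2.0, 15.0, 300)  # skewed noise

# B-spline basis, then pinball-loss fits at two quantile levels.
X = patsy.dmatrix("bs(ros, df=5)", {"ros": ros}, return_type="dataframe")
fit_med = sm.QuantReg(prod, X).fit(q=0.5)   # conditional median curve
fit_q90 = sm.QuantReg(prod, X).fit(q=0.9)   # upper-tail curve

print(fit_med.predict(X)[:5].round(1))      # fitted median values
print(fit_q90.predict(X)[:5].round(1))      # fitted 90th-percentile values
```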
Procedia PDF Downloads 302
6530 Optimum Stratification of a Skewed Population
Authors: D. K. Rao, M. G. M. Khan, K. G. Reddy
Abstract:
The focus of this paper is to develop a technique for solving the combined problem of determining the Optimum Strata Boundaries (OSB) and the Optimum Sample Size (OSS) of each stratum when the population under study is skewed and the study variable has a Pareto frequency distribution. The problem of determining the OSB is formulated as a Mathematical Programming Problem (MPP), which is then solved by a dynamic programming technique. A numerical example is presented to illustrate the computational details of the proposed method. The proposed technique is useful for obtaining the OSB and OSS for a Pareto-type skewed population, minimizing the variance of the estimate of the population mean.
Keywords: stratified sampling, optimum strata boundaries, optimum sample size, Pareto distribution, mathematical programming problem, dynamic programming technique
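A stripped-down sketch of the underlying optimization for the special case of two strata: grid-search the boundary of a simulated Pareto population to minimize the variance of the stratified mean under Neyman allocation (the paper solves the general multi-stratum MPP by dynamic programming).

```python
import numpy as np

rng = np.random.default_rng(3)
pop = rng.pareto(a=3.0, size=100_000) + 1.0      # Pareto population
n = 500                                          # total sample size

def neyman_variance(boundary):
    """Var of the stratified sample mean for two strata cut at 'boundary',
    with the sample allocated by Neyman allocation (FPC ignored)."""
    low, high = pop[pop <= boundary], pop[pop > boundary]
    if len(low) < 2 or len(high) < 2:
        return np.inf
    W = np.array([len(low), len(high)]) / len(pop)       # stratum weights
    S = np.array([low.std(ddof=1), high.std(ddof=1)])    # stratum SDs
    n_h = n * (W * S) / np.sum(W * S)                    # Neyman allocation
    return np.sum(W ** 2 * S ** 2 / n_h)

grid = np.quantile(pop, np.linspace(0.05, 0.99, 200))
best = min(grid, key=neyman_variance)
print("OSB ~", round(float(best), 3), " var ~", neyman_variance(best))
```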
Procedia PDF Downloads 455
6529 Application of Neutron Stimulated Gamma Spectroscopy for Soil Elemental Analysis and Mapping
Authors: Aleksandr Kavetskiy, Galina Yakubova, Nikolay Sargsyan, Stephen A. Prior, H. Allen Torbert
Abstract:
Determining soil elemental content and its distribution (mapping) within a field are key features of modern agricultural practice. While traditional chemical analysis is a time-consuming and labor-intensive multi-step process (e.g., sample collection, transport to the laboratory, physical preparation, and chemical analysis), neutron-gamma soil analysis can be performed in situ. This analysis is based on the registration of gamma rays emitted from nuclei upon interaction with neutrons. Soil elements such as Si, C, Fe, O, Al, K, and H (moisture) can be assessed with this method. Data received from the analysis can be used directly for creating soil elemental distribution maps (based on ArcGIS software) suitable for agricultural purposes. The neutron-gamma analysis system developed for field application consisted of an MP320 Neutron Generator (Thermo Fisher Scientific, Inc.), 3 sodium iodide gamma detectors (SCIONIX, Inc.) with a total volume of 7 liters, 'split electronics' (XIA, LLC), a power system, and an operational computer. Paired with GPS, this system can be used in scanning mode to acquire gamma spectra while traversing a field. Using the acquired spectra, soil elemental content can be calculated, and these data can be combined with geographical coordinates in a geographical information system (i.e., ArcGIS) to produce elemental distribution maps suitable for agricultural purposes. Special software has been developed that acquires gamma spectra, processes and sorts data, calculates soil elemental content, and combines these data with measured geographic coordinates to create soil elemental distribution maps. For example, 5.5 hours were needed to acquire the data necessary for creating a carbon distribution map of an 8.5 ha field. This paper will briefly describe the physics behind the neutron-gamma analysis method, the physical construction of the measurement system, and its main characteristics and modes of operation when conducting field surveys. Soil elemental distribution maps resulting from field surveys will be presented and discussed. Comparison of these maps with maps created on the basis of chemical analysis and of soil moisture measurements determined by soil electrical conductivity showed similar results, and the maps created by neutron-gamma analysis were reproducible as well. Based on these facts, it can be asserted that neutron-stimulated soil gamma spectroscopy paired with a GPS system is fully applicable for agricultural field mapping of soil elements.
Keywords: ArcGIS mapping, neutron gamma analysis, soil elemental content, soil gamma spectroscopy
Procedia PDF Downloads 134
6528 3D Liver Segmentation from CT Images Using a Level Set Method Based on a Shape and Intensity Distribution Prior
Authors: Nuseiba M. Altarawneh, Suhuai Luo, Brian Regan, Guijin Tang
Abstract:
Liver segmentation from medical images poses more challenges than analogous segmentations of other organs. This contribution introduces a liver segmentation method from a series of computed tomography images. Overall, we present a novel method for segmenting the liver by coupling density matching with shape priors. Density matching signifies a tracking method which operates via maximizing the Bhattacharyya similarity measure between the photometric distribution from an estimated image region and a model photometric distribution. Density matching controls the direction of the evolution process and slows down the evolving contour in regions with weak edges. The shape prior improves the robustness of density matching and discourages the evolving contour from exceeding the liver's boundaries at regions with weak boundaries. The model is implemented using a modified distance regularized level set (DRLS) model. The experimental results show that the method achieves satisfactory results. By comparing with the original DRLS model, it is evident that the model proposed herein is more effective in addressing the over-segmentation problem. Finally, we gauge the performance of our model using metrics comprising accuracy, sensitivity and specificity.
Keywords: Bhattacharyya distance, distance regularized level set (DRLS) model, liver segmentation, level set method
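The density-matching term is easy to state in code: the Bhattacharyya coefficient between the model and estimated-region photometric distributions (toy histograms below; the level-set machinery itself is omitted).

```python
import numpy as np

def bhattacharyya(p, q, eps=1e-12):
    """Bhattacharyya coefficient BC = sum(sqrt(p*q)) between two discrete
    photometric distributions, and the associated distance -ln(BC)."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))
    return bc, -np.log(bc + eps)

# Toy stand-ins for model vs. estimated-region intensity histograms.
rng = np.random.default_rng(4)
model_hist, _ = np.histogram(rng.normal(120, 15, 5000), bins=64, range=(0, 255))
region_hist, _ = np.histogram(rng.normal(125, 18, 5000), bins=64, range=(0, 255))

bc, dist = bhattacharyya(model_hist.astype(float), region_hist.astype(float))
print(f"BC = {bc:.3f}, Bhattacharyya distance = {dist:.3f}")
```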
Procedia PDF Downloads 314
6527 Dynamic Performance Analysis of Distribution/ Sub-Transmission Networks with High Penetration of PV Generation
Authors: Cristian F.T. Montenegro, Luís F. N. Lourenço, Maurício B. C. Salles, Renato M. Monaro
Abstract:
More PV systems are connected to the electrical network each year. As the number of PV systems increases, some issues affecting grid operation have been identified. This paper studies the impacts of changes in solar irradiance on a distribution/sub-transmission network, considering variations due to moving clouds and daily cycles. Using MATLAB/Simulink software, a solar farm of 30 MWp was modeled and then connected to a test network. From the simulations, it was determined that irradiance changes can have a significant impact on the grid by causing voltage fluctuations outside the allowable thresholds. This work discusses some local control strategies and grid reinforcements to mitigate the negative effects of irradiance changes on the grid.
Keywords: reactive power control, solar irradiance, utility-scale PV systems, voltage fluctuations
Procedia PDF Downloads 461
6526 The Simulation and Experimental Investigation to Study the Strain Distribution Pattern during the Closed Die Forging Process
Authors: D. B. Gohil
Abstract:
Closed die forging is a very complex process, and the measurement of actual forces for the real material is difficult and time-consuming. Hence, the modelling technique takes advantage of carrying out the experimentation with a proper model material, which needs lower forces and relatively low temperatures. The results of experiments on the model material may then be correlated with the actual material by using the theory of similarity. There are several methods available to resolve the complexity involved in the closed die forging process. The Finite Element Method (FEM) and the Finite Difference Method (FDM) are relatively difficult as compared to the slab method. The slab method is very popular and very widely used by people working on the shop floor because it is relatively easy to apply and reasonably accurate for most of the common forging load requirement computations.
Keywords: experimentation, forging, process modeling, strain distribution
Procedia PDF Downloads 201
6525 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations
Authors: K. Al Ammari, B. G. Clarke
Abstract:
Soil improvement using vibro stone column techniques consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research work has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or to consider these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study to develop an axisymmetric model of a single stone column reinforced foundation was performed using PLAXIS 2D AE to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. In this analysis, different degrees of stone column lateral expansion were simulated and numerically analyzed, and then the changes in the stress state, stiffness, settlement performance and bearing capacity were quantified. It was found that the application of radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax. Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. Thus, a design method should include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis design.
Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column
Procedia PDF Downloads 375
6524 Wireless Sensor Network for Forest Fire Detection and Localization
Authors: Tarek Dandashi
Abstract:
WSNs may provide a fast and reliable solution for the early detection of environmental events like forest fires, which is crucial for alerting and calling for fire brigade intervention. Sensor nodes communicate sensor data to a host station, which enables a global analysis and the generation of a reliable decision on a potential fire and its location. A WSN with TinyOS and nesC is presented for capturing and transmitting a variety of sensor information with controlled source, data rates and duration, and for recording/displaying activity traces. We propose a similarity distance (SD) between the distribution of currently sensed data and that of a reference. At any given time, a fire causes diverging opinions in the reported data, which alters the usual data distribution. Basically, SD consists of a metric on the Cumulative Distribution Function (CDF). SD is designed to be invariant versus day-to-day changes of temperature, changes due to the surrounding environment, and normal changes in weather, all of which preserve the data locality. Evaluation shows that SD sensitivity is quadratic versus an increase in sensor node temperature for groups of sensors of different sizes and neighborhoods. Simulation of fire spreading, with ignition placed at random locations under some wind speed, shows that SD takes a few minutes to reliably detect fires and locate them. We also discuss the cases of false negatives and false positives and their impact on decision reliability.
Keywords: forest fire, WSN, wireless sensor network, algorithm
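The abstract does not give the exact CDF metric, so as an assumed instance, here is SD taken as the maximum vertical gap between the reference and current empirical CDFs (Kolmogorov-Smirnov style):

```python
import numpy as np

def similarity_distance(current, reference):
    """One natural SD metric on empirical CDFs: the maximum vertical gap
    between the two CDFs (KS style); the paper's exact metric may differ."""
    grid = np.union1d(current, reference)
    cdf_c = np.searchsorted(np.sort(current), grid, side="right") / len(current)
    cdf_r = np.searchsorted(np.sort(reference), grid, side="right") / len(reference)
    return np.abs(cdf_c - cdf_r).max()

rng = np.random.default_rng(5)
reference = rng.normal(25.0, 2.0, 500)           # normal-day temperatures
normal_day = rng.normal(25.5, 2.0, 500)          # data locality preserved
fire_event = rng.normal(25.0, 2.0, 500)
fire_event[:50] += 30.0                          # a few nodes near the fire

print("normal:", similarity_distance(normal_day, reference))
print("fire:  ", similarity_distance(fire_event, reference))
```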
Procedia PDF Downloads 263
6523 Optimizing Mechanical Behavior of Middle Ear Prosthesis Using Finite Element Method with Material Degradation Functionally Graded Materials in Three Functions
Authors: Khatir Omar, Fekih Sidi Mohamed, Sahli Abderahmene, Benkhettou Abdelkader, Boudjemaa Ismail
Abstract:
Advancements in technology have revolutionized healthcare, with notable impacts on auditory health. This study introduces an approach aimed at optimizing materials for middle ear prostheses to enhance auditory performance. We developed a finite element (FE) model of the ear incorporating a pure titanium TORP prosthesis, validated against experimental data. Subsequently, we applied the Functionally Graded Materials (FGM) methodology, utilizing linear, exponential, and logarithmic gradation functions to modify the prosthesis materials. Biocompatible materials suitable for auditory prostheses, including stainless steel, titanium, and hydroxyapatite, were investigated. The findings indicate that combinations such as stainless steel with titanium and hydroxyapatite offer improved outcomes compared to pure titanium and hydroxyapatite ceramic, in terms of both displacement and stress. Additionally, personalized prostheses tailored to individual patient needs are feasible, underscoring the potential for further advancements in auditory healthcare.
Keywords: middle ear, prosthesis, ossicles, FGM, vibration analysis, finite-element method
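A sketch of the three gradation laws for a two-material prosthesis, with the effective modulus varying from E1 to E2 along the normalized prosthesis axis; the functional forms and moduli are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

# Assumed gradation laws for a two-material FGM: effective property E(z)
# runs from E1 at z=0 to E2 at z=1 along the prosthesis axis.
E1, E2 = 110e9, 193e9        # e.g. titanium -> stainless steel [Pa]

def linear(z):      return E1 + (E2 - E1) * z
def exponential(z): return E1 * (E2 / E1) ** z            # E1*exp(z*ln(E2/E1))
def logarithmic(z): return E1 + (E2 - E1) * np.log1p(z) / np.log(2.0)

z = np.linspace(0.0, 1.0, 5)
for law in (linear, exponential, logarithmic):
    print(law.__name__, np.round(law(z) / 1e9, 1), "GPa")
```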
Procedia PDF Downloads 87
6522 Application of Optimization Techniques in Overcurrent Relay Coordination: A Review
Authors: Syed Auon Raza, Tahir Mahmood, Syed Basit Ali Bukhari
Abstract:
In a power system, a properly coordinated protection scheme is designed to make sure that only the faulty part of the system is isolated when an abnormal operating condition is reached. The complexity of the system, as well as increased user demand and the deregulated environment, forces utilities to improve system reliability by using a properly coordinated protection scheme. This paper presents an overview of overcurrent relay coordination techniques. Different techniques, such as deterministic techniques, meta-heuristic optimization techniques, hybrid optimization techniques, and trial-and-error optimization techniques, are reviewed in terms of their method of implementation, operation modes, the nature of the distribution system, and finally their advantages as well as their disadvantages.
Keywords: distribution system, relay coordination, optimization, Plug Setting Multiplier (PSM)
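All of these techniques ultimately tune the same quantities; for instance, on the IEC standard inverse curve, the relay operating time is t = TMS * 0.14 / (PSM^0.02 - 1), and coordination requires a time margin between downstream and upstream relays. A minimal sketch with illustrative settings:

```python
# Operating time of an IDMT overcurrent relay on the IEC standard
# inverse curve: t = TMS * 0.14 / (PSM**0.02 - 1).
def relay_time(psm: float, tms: float) -> float:
    assert psm > 1.0, "relay only operates above pickup"
    return tms * 0.14 / (psm ** 0.02 - 1.0)

# Example coordination check for a fault seen by both relays
# (illustrative settings, not taken from a specific reviewed study).
t_downstream = relay_time(psm=8.0, tms=0.10)
t_upstream = relay_time(psm=5.0, tms=0.30)
cti = 0.3   # coordination time interval [s]
print(f"downstream {t_downstream:.2f}s, upstream {t_upstream:.2f}s, "
      f"coordinated: {t_upstream - t_downstream >= cti}")
```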
Procedia PDF Downloads 399
6521 An Improved Method on Static Binary Analysis to Enhance the Context-Sensitive CFI
Authors: Qintao Shen, Lei Luo, Jun Ma, Jie Yu, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
Control Flow Integrity (CFI) is one of the most promising techniques to defend against Code-Reuse Attacks (CRAs). Traditional CFI systems and recent Context-Sensitive CFI use coarse control flow graphs (CFGs) to analyze whether a control flow hijack occurs, leaving vast space for attackers at indirect call-sites. Coarse CFGs make it difficult to decide which target to execute at indirect control-flow transfers, and actually weaken existing CFI systems. Extracting CFGs precisely and completely from binaries remains an unsolved problem. In this paper, we present an algorithm to obtain a more precise CFG from binaries. First, parameters are analyzed at indirect call-sites and functions. By comparing the count of parameters prepared before a call-site with the count consumed by each function, the set of targets of indirect calls is reduced, so the control flow is more constrained at indirect call-sites at runtime. Combined with CCFI, we implement our policy. Experimental results on some popular programs show that our approach is efficient. Further analysis shows that it can mitigate COOP and other advanced attacks.
Keywords: context-sensitive, CFI, binary analysis, code reuse attack
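A minimal sketch of the pruning rule described above, with hypothetical call-sites and functions (real binaries require dataflow analysis to recover these counts): a call-site that prepares k arguments keeps only targets consuming k.

```python
# Hypothetical inputs: argument counts set up before each indirect call,
# and parameter counts consumed by each candidate function. Exact-match
# pruning is shown; a <= rule is another possible instantiation.
callsite_args = {
    "0x401a10": 2,
    "0x4022f3": 3,
}
function_params = {
    "parse_header": 2,
    "write_log": 3,
    "free_buffer": 1,
}

def allowed_targets(site: str) -> list[str]:
    k = callsite_args[site]
    return [f for f, n in function_params.items() if n == k]

for site in callsite_args:
    print(site, "->", allowed_targets(site))   # pruned indirect-call targets
```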
Procedia PDF Downloads 323
6520 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation
Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone
Abstract:
A Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about the future status, to the Distribution System Operator (DSO), producers and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the tools developed inside the application, tested on a real EDS, for grid observability, Smart Recursive Load Flow (SRLF) calculation and state estimation of loads in MV feeders.
Keywords: digital twin, distributed energy resources, remote terminal units, supervisory control and data acquisition system, smart recursive load flow
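As a sketch of one classic recursive load-flow scheme for radial feeders, here is the backward/forward sweep on a toy three-bus feeder; the abstract does not detail SRLF itself, and the per-unit values are illustrative.

```python
import numpy as np

# Toy 3-bus radial feeder: slack -> bus 1 -> bus 2, per-unit values.
Z = np.array([0.01 + 0.03j, 0.02 + 0.04j])     # branch impedances
S_load = np.array([0.8 + 0.3j, 0.5 + 0.2j])    # complex loads at buses 1, 2
V_slack = 1.0 + 0.0j

V = np.array([V_slack, V_slack])               # flat start
for _ in range(30):
    I_load = np.conj(S_load / V)               # backward sweep: currents
    I_branch = np.array([I_load.sum(), I_load[1]])
    V_next = np.empty(2, dtype=complex)        # forward sweep: voltages
    V_next[0] = V_slack - Z[0] * I_branch[0]
    V_next[1] = V_next[0] - Z[1] * I_branch[1]
    converged = np.max(np.abs(V_next - V)) < 1e-10
    V = V_next
    if converged:
        break

print("|V| =", np.abs(V), " angle[deg] =", np.angle(V, deg=True))
```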
Procedia PDF Downloads 111