Search results for: instrumental variable estimation
3739 Understanding the Classification of Rain Microstructure and Estimation of Z-R Relationship using a Micro Rain Radar in Tropical Region
Authors: Tomiwa, Akinyemi Clement
Abstract:
Tropical regions experience diverse and complex precipitation patterns, posing significant challenges for accurate rainfall estimation and forecasting. This study addresses the problem of effectively classifying tropical rain types and refining the Z-R (Reflectivity-Rain Rate) relationship to enhance rainfall estimation accuracy. Through a combination of remote sensing, meteorological analysis, and machine learning, the research aims to develop an advanced classification framework capable of distinguishing between different types of tropical rain based on their unique characteristics. This involves utilizing high-resolution satellite imagery, radar data, and atmospheric parameters to categorize precipitation events into distinct classes, providing a comprehensive understanding of tropical rain systems. Additionally, the study seeks to improve the Z-R relationship, a crucial aspect of rainfall estimation. One year of rainfall data was analyzed using a Micro Rain Radar (MRR) located at The Federal University of Technology Akure, Nigeria, measuring rainfall parameters from ground level to a height of 4.8 km with a vertical resolution of 0.16 km. Rain rates were classified into low (stratiform) and high (convective) based on various microstructural attributes such as rain rates, liquid water content, Drop Size Distribution (DSD), average fall speed of the drops, and radar reflectivity. By integrating diverse datasets and employing advanced statistical techniques, the study aims to enhance the precision of Z-R models, offering a more reliable means of estimating rainfall rates from radar reflectivity data. This refined Z-R relationship holds significant potential for improving our understanding of tropical rain systems and enhancing forecasting accuracy in regions prone to heavy precipitation.
Keywords: remote sensing, precipitation, drop size distribution, micro rain radar
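For context, the Z-R relationship discussed above is conventionally the power law Z = aR^b (e.g., the classic Marshall-Palmer relation Z = 200R^1.6). A minimal sketch of how such a fit can be obtained from paired rain-rate/reflectivity observations follows; the data values are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical paired observations: rain rate R (mm/h) and radar
# reflectivity Z (mm^6/m^3) for one rain-type class.
R = np.array([0.5, 1.2, 3.4, 8.1, 15.0, 32.0])
Z = np.array([1.2e2, 3.4e2, 1.1e3, 3.9e3, 9.2e3, 2.8e4])

# Fit Z = a * R^b by linear least squares in log-log space.
b, log_a = np.polyfit(np.log(R), np.log(Z), 1)
a = np.exp(log_a)
print(f"Z = {a:.1f} * R^{b:.2f}")  # compare against Marshall-Palmer Z = 200 R^1.6
```

Classifying events as stratiform or convective before fitting, as done in the study, yields a separate (a, b) pair per rain type.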
Procedia PDF Downloads 40
3738 Production and Leftovers Usage Policies to Minimize Food Waste under Uncertain and Correlated Demand
Authors: Esma Birisci, Ronald McGarvey
Abstract:
One of the common problems in the food service industry is demand uncertainty. This research presents a multi-criteria optimization approach to identify the efficient frontier of points lying between the minimum-waste and minimum-shortfall solutions within an uncertain demand environment. It also addresses correlation across demands for items (e.g., hamburgers are often demanded with french fries). Reducing overproduction food waste (and its corresponding environmental impacts) and avoiding shortfalls (which leave some customers hungry) need to be considered as two contradictory objectives in an all-you-care-to-eat food service operation. We identify optimal production adjustments relative to demand forecasts, demand thresholds for utilization of leftovers, and percentages of demand to be satisfied by leftovers, considering two alternative metrics for overproduction waste: mass and greenhouse gas emissions. Demand uncertainty and demand correlations are addressed using a kernel density estimation approach. A statistical analysis of the changes in decision variable values across each of the efficient frontiers can then be performed to identify the key variables that could be modified to reduce the amount of wasted food at minimal increase in shortfalls. We illustrate our approach with an application to empirical data from Campus Dining Services operations at the University of Missouri.
Keywords: environmental studies, food waste, production planning, uncertain and correlated demand
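A minimal sketch of the kernel density estimation step described above, on invented demand figures for two correlated menu items; the joint KDE preserves the cross-item correlation in the sampled scenarios that feed the optimization.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical historical daily demands for two correlated items
# (e.g., hamburgers and fries); rows = items, columns = days.
rng = np.random.default_rng(0)
burgers = rng.normal(500, 60, 200)
fries = 0.8 * burgers + rng.normal(0, 25, 200)   # correlated with burgers
demand = np.vstack([burgers, fries])

kde = gaussian_kde(demand)            # joint (correlated) density estimate
scenarios = kde.resample(1000)        # demand scenarios for the optimizer
print(np.corrcoef(scenarios)[0, 1])   # correlation carries over to samples
```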
Procedia PDF Downloads 374
3737 Annular Hyperbolic Profile Fins with Variable Thermal Conductivity Using Laplace Adomian Transform and Double Decomposition Methods
Authors: Yinwei Lin, Cha'o-Kuang Chen
Abstract:
In this article, the Laplace Adomian transform method (LADM) and the double decomposition method (DDM) are used to solve annular hyperbolic profile fins with variable thermal conductivity. When the thermal conductivity parameter ε is relatively large, the numerical solution obtained with DDM becomes incorrect. Moreover, when more than seven terms of the DDM are used, the numerical solution becomes very complicated. The present method, however, can be easily calculated beyond seven terms and yields more precise numerical solutions. For relatively large ε, LADM also has better accuracy than DDM.
Keywords: fins, thermal conductivity, Laplace transform, Adomian, nonlinear
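For orientation, a canonical dimensionless fin equation with linearly temperature-dependent conductivity, k(θ) = kₐ(1 + εθ), is shown below; this is the straight-fin analogue only, since the annular hyperbolic profile studied here adds variable cross-section terms, but it illustrates where the parameter ε introduces the nonlinearity.

$$\frac{d}{dX}\left[(1+\varepsilon\theta)\frac{d\theta}{dX}\right]-N^{2}\theta=0$$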
Procedia PDF Downloads 336
3736 Comparison of Spiral Circular Coil and Helical Coil Structures for Wireless Power Transfer System
Authors: Zhang Kehan, Du Luona
Abstract:
Wireless power transfer (WPT) systems have been widely investigated for their advantages of convenience and safety compared to traditional plug-in charging systems. Research topics include impedance matching, circuit topology, and transfer distance, among others, all aimed at improving the efficiency of the WPT system, which is a decisive factor in practical applications. Moreover, coil structures such as the spiral circular coil and the helical coil with variable distance between two turns also have an indispensable effect on the efficiency of WPT systems. This paper compares the efficiency of WPT systems utilizing spiral or helical coils with variable distance between two turns, and experimental results show that the efficiency of the spiral circular coil with an optimum distance between two turns is the highest. Based on the efficiency formula of a resonant WPT system with series-series topology, we introduce M²/R₁ to measure the efficiency of spiral circular coil and helical coil WPT systems. If the distance between two turns s is too small, proximity effect theory shows that the current induced in the conductor by the variable flux created by the current flowing in the skin of the neighbouring conductor opposes the source current and has an appreciable impact on coil resistance. Thus, in both coil structures, s affects coil resistance. At the same time, when the distance between the primary and secondary coils is fixed, s also influences M to some degree. This shows that s plays an indispensable role in changing M²/R₁ and can therefore be adjusted to find the optimum value at which the WPT system achieves the highest efficiency. In practical applications of WPT systems, especially in underwater vehicles, miniaturization is a vital issue in designing the system structure. Limited by system size, the largest external radius of the spiral circular coil is 100 mm, and the largest height of the helical coil is 40 mm. In other words, the number of turns N changes with s. In both structures, the distance between each two turns in the secondary coil is set to a constant value of 1 mm to guarantee that R₂ does not vary. Based on the analysis above, we set up spiral circular coil and helical coil models in COMSOL to analyze the value of M²/R₁ as the distance between each two turns in the primary coil, sₚ, varies from 0 mm to 10 mm. In the two structure models, the distance between the primary and secondary coils is 50 mm and the wire diameter is 1.5 mm. The numbers of turns in the secondary coil are 27 in the helical coil model and 20 in the spiral circular coil model. The best values of s in the helical coil structure and the spiral circular coil structure are 1 mm and 2 mm, respectively, at which the value of M²/R₁ is largest. The spiral circular coil is thus the obvious first choice for designing the WPT system, since its value of M²/R₁ is larger than that of the helical coil under the same conditions.
Keywords: distance between two turns, helical coil, spiral circular coil, wireless power transfer
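For reference, one standard form of the efficiency of a series-series resonant link at resonance, with primary and secondary coil resistances R₁ and R₂ and load R_L, is given below; with R₂ and R_L held fixed, it shows why M²/R₁ serves as a figure of merit. This textbook expression is an assumption for illustration; the paper may use a variant.

$$\eta=\frac{(\omega M)^{2}R_{L}}{\left(R_{2}+R_{L}\right)\left[R_{1}\left(R_{2}+R_{L}\right)+(\omega M)^{2}\right]}$$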
Procedia PDF Downloads 349
3735 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images
Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir
Abstract:
The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily lead to a hard landing or even a crash. In this paper, the estimation of relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Accurate measurement sensors can be very expensive (e.g., LIDAR) or limited in operational range (e.g., ultrasonic sensors). Additionally, absolute positioning systems like GPS or IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature detection technique is employed to extract the most suitable feature in a series of images taken during the UAV landing. Two different approaches based on Extended Kalman Filters (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process and the calculated optical flow as the measurement; the second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the variation of the projected point as the process, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed was used to compare the performance of the proposed algorithms. The case studies show that the image quality introduces considerable noise, which reduces the performance of the first approach. The projected feature position, on the other hand, is much less sensitive to this noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
Keywords: altitude estimation, drone, image processing, trajectory planning
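A minimal EKF sketch in the spirit of the second approach, with state [height, vertical speed] and a nonlinear pixel measurement; the measurement model, focal length, feature size, and noise values are invented for illustration and are not the authors' exact formulation.

```python
import numpy as np

f_px, L, dt = 800.0, 0.5, 0.05        # focal length (px), feature size (m), step (s)
x = np.array([10.0, -1.0])            # state: [height z (m), vertical speed vz (m/s)]
P = np.diag([4.0, 1.0])
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity process model
Q = np.diag([1e-4, 1e-3])
R = 4.0                                # pixel-noise variance

def ekf_step(x, P, u_meas):
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    z = x[0]
    u_pred = f_px * L / z              # nonlinear measurement: pixel extent of feature
    H = np.array([[-f_px * L / z**2, 0.0]])  # Jacobian of the measurement model
    S = H @ P @ H.T + R
    K = P @ H.T / S                    # Kalman gain
    x = x + (K * (u_meas - u_pred)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = ekf_step(x, P, u_meas=42.0)     # one update with a fake pixel measurement
print(x)                               # updated [height, vertical speed] estimate
```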
Procedia PDF Downloads 113
3734 The Intention to Use E-Money Transaction: The Moderating Effect of Security in Conceptual Framework
Authors: Husnil Khatimah, Fairol Halim
Abstract:
This research examines the moderating impact of security on the intention to use e-money, drawing on variables adapted from the TAM (Technology Acceptance Model) and the TPB (Theory of Planned Behavior). The study uses security as a moderating variable and examines how the relationships depend on customers' intention to use e-money as a payment tool. The conceptual framework of e-money transactions was reviewed to understand consumers' behavioral intention in terms of perceived usefulness, perceived ease of use, perceived behavioral control, and security. Quantitative methods will be utilized for data collection. A total of one thousand respondents will be selected using a quota sampling method in Medan, Indonesia. Descriptive analysis and multiple regression analysis will be conducted to analyze the data. The article ends with suggestions for future studies.
Keywords: e-money transaction, TAM & TPB, moderating variable, behavioral intention, conceptual paper
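The moderation effect described above is commonly operationalized as an interaction term in a multiple regression; the sketch below, on simulated data with invented variable names, shows the test (a significant interaction coefficient indicates that security moderates the relationship). This is a generic illustration, not the study's model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
usefulness = rng.normal(size=n)            # e.g., perceived usefulness
security = rng.normal(size=n)              # the proposed moderator
intention = (0.5 * usefulness + 0.3 * security
             + 0.2 * usefulness * security + rng.normal(size=n))

X = sm.add_constant(pd.DataFrame({
    "usefulness": usefulness,
    "security": security,
    "interaction": usefulness * security,  # moderation term
}))
fit = sm.OLS(intention, X).fit()
print(fit.params["interaction"], fit.pvalues["interaction"])
```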
Procedia PDF Downloads 455
3733 Monitoring of Sustainability of Extruded Soya Product TRADKON SPC-TEX in Order to Define Expiration Date
Authors: Radovan Čobanović, Milica Rankov Šicar
Abstract:
New attitudes about nutrition impose new styles, and therefore a new kind of food. The goal of our work was to define the shelf life of a new extruded soya product with a minimum of 65% protein, based on the analyses. According to the plan, a certain quantity of the same batch of the new product (soybean flakes), which had a predicted shelf life of 2 years, was stored for 24 months and analyzed at the beginning and end of the sustainability plan by instrumental analyses (heavy metals, pesticides, and mycotoxins), every month by sensory analyses (odor, taste, color, consistency), microbiological analyses (Salmonella spp., Escherichia coli, Enterobacteriaceae, sulfite-reducing clostridia, Listeria monocytogenes), and chemical analyses (protein, ash, fat, crude cellulose, granulation), and at the beginning by GMO analyses. All analyses were performed according to: sensory analyses ISO 6658, Salmonella spp. ISO 6579, Escherichia coli ISO 16649-2, Enterobacteriaceae ISO 21528-2, sulfite-reducing clostridia ISO 15213, and Listeria monocytogenes ISO 11290-2; chemical and instrumental analyses followed the Serbian ordinance on the methods of physico-chemical analyses, and GMO analyses the JRC Compendium. The results obtained over the 24 months indicate that there are no changes in the product in either the sensory or the chemical analyses.
As far as microbiological results are concerned, Salmonella spp. was not detected, and all other quantitative analyses showed values <10 cfu/g. The other food safety parameters (heavy metals, pesticides, and mycotoxins) were not present in the analyzed samples, and all analyzed samples were also negative in genetic testing. On the basis of monitoring the sample under defined storage conditions, together with the quality control, GMO, and food safety analyses carried out over the two-year shelf life, the results show that all parameters of the sample during the defined period are in accordance with Serbian regulations, indicating that the predicted shelf life can be adopted.
Keywords: extruded soya product, food safety analyses, GMO analyses, shelf life
Procedia PDF Downloads 296
3732 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and inverse-Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small to moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to datasets if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. A Mathematica code uses Fisher scoring as the iteration method to obtain the maximum likelihood estimates (MLE) of the regression parameters.
Keywords: maximum likelihood estimation, fisher scoring method, non-linear regression models, composite distributions
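For readers unfamiliar with the Fisher scoring iteration named in the keywords, it updates the parameter vector using the score U(β) (the gradient of the log-likelihood) and the expected Fisher information I(β):

$$\beta^{(t+1)}=\beta^{(t)}+\mathcal{I}\left(\beta^{(t)}\right)^{-1}U\left(\beta^{(t)}\right)$$

Iteration continues until the change in β (or in the log-likelihood) falls below a tolerance, yielding the MLE.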
Procedia PDF Downloads 36
3731 A Nonstandard Finite Difference Method for Weather Derivatives Pricing Model
Authors: Clarinda Vitorino Nhangumbe, Fredericks Ebrahim, Betuel Canhanga
Abstract:
The price of a weather derivative option can be approximated as the solution of a two-dimensional, convection-dominated convection-diffusion partial differential equation derived from the Ornstein-Uhlenbeck process, where one variable represents the weather dynamics and the other represents the underlying weather index. With appropriate financial boundary conditions, the solution of the pricing equation is approximated using a nonstandard finite difference method. It is shown that the proposed numerical scheme preserves positivity as well as stability and consistency. In order to illustrate the accuracy of the method, the numerical results are compared with those of other methods. The model is tested on real weather data.
Keywords: nonstandard finite differences, Ornstein-Uhlenbeck process, partial differential equations approach, weather derivatives
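For context, the Ornstein-Uhlenbeck dynamics referenced above for a weather variable Tₜ (e.g., temperature) reverting to a possibly seasonal mean θ(t) at speed κ with volatility σ is typically written as follows; the notation is assumed here, not taken from the paper.

$$dT_{t}=\kappa\left(\theta(t)-T_{t}\right)dt+\sigma\,dW_{t}$$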
Procedia PDF Downloads 113
3730 A Semiparametric Approach to Estimate the Mode of Continuous Multivariate Data
Authors: Tiee-Jian Wu, Chih-Yuan Hsu
Abstract:
Mode estimation is an important task because it has applications to data from a wide variety of sources. We propose a semi-parametric approach to estimate the mode of an unknown continuous multivariate density function. Our approach is based on a weighted average of a parametric density estimate using the Box-Cox transform and a non-parametric kernel density estimate. Our semi-parametric mode estimate improves on both the parametric and non-parametric mode estimates. Specifically, it solves the non-consistency problem of parametric mode estimates (at large sample sizes) and reduces the variability of non-parametric mode estimates (at small sample sizes). The performance of our method at practical sample sizes is demonstrated by simulation examples and two real examples from the fields of climatology and image recognition.
Keywords: Box-Cox transform, density estimation, mode seeking, semiparametric method
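A univariate sketch of the blended-density idea on simulated data follows; the fixed weight and the log transform (the λ → 0 limit of the Box-Cox family) are simplifications, and the paper's multivariate weighting scheme differs.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.gamma(4.0, 2.0, 300)            # skewed sample

y = np.log(x)                           # Box-Cox with lambda -> 0 is the log
mu, sd = y.mean(), y.std(ddof=1)        # parametric fit on transformed data
kde = gaussian_kde(x)                   # nonparametric estimate, original scale
w = 0.5                                 # illustrative fixed blending weight

def neg_density(t):
    par = norm.pdf(np.log(t), mu, sd) / t   # parametric density, back-transformed
    return -(w * par + (1 - w) * kde(t)[0])

res = minimize_scalar(neg_density, bounds=(x.min(), x.max()), method="bounded")
print("semiparametric mode estimate:", res.x)
```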
Procedia PDF Downloads 285
3729 Lipschitz Classifiers Ensembles: Usage for Classification of Target Events in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
This paper introduces an original method for guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers. The solution is obtained as a finite closed set of alternative hypotheses which contains the object of classification with a probability not less than a specified value. The classification is thus represented by a set of hypothetical classes, and the smaller the cardinality of this discrete set, the higher the classification accuracy. Experiments have shown that increasing the cardinality of the classifier ensemble reduces the cardinality of the set of hypothetical classes. The problem of guaranteed estimation of the accuracy of an ensemble of Lipschitz classifiers is relevant to the multichannel classification of target events in C-OTDR monitoring systems. Results of the practical application of the suggested approach to accuracy control in C-OTDR monitoring systems are presented.
Keywords: Lipschitz classifiers, confidence set, C-OTDR monitoring, classifiers accuracy, classifiers ensemble
Procedia PDF Downloads 493
3728 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes
Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi
Abstract:
Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the estimation of these variables' effects on the disease's progression. The objective of the study is to determine, through advanced statistical methods, how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and what their effects are on kidney disease progression. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of biochemical and clinical variables on the ordinal response variable (progression of kidney function), considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetic cohort in a polyclinic hospital at the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (P value = 0.000). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.
Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes
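A compact sketch of the two-stage analysis described above, on simulated data with invented variable names: PCA de-correlates the biochemical markers, and a cumulative-odds (proportional odds) model then relates the components to the ordinal disease stage.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(3)
n = 300
bio = pd.DataFrame(rng.normal(size=(n, 5)),
                   columns=["hba1c", "glycemia", "creatinine", "chol", "ldl"])
stage = pd.Series(pd.Categorical(rng.integers(0, 3, n), ordered=True))  # CKD stage

pcs = PCA(n_components=3).fit_transform(bio)     # uncorrelated components
X = pd.DataFrame(pcs, columns=["PC1", "PC2", "PC3"])

fit = OrderedModel(stage, X, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(fit.params[:3]))    # odds ratios for the three components
```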
Procedia PDF Downloads 42
3727 Smart Energy Consumers: An Empirical Investigation on the Intention to Adopt Innovative Consumption Behaviour
Authors: Cecilia Perri, Vincenzo Corvello
Abstract:
The aim of the present study is to investigate the determinants of consumers' intention to adopt Smart Grid solutions and technologies. Ajzen's Theory of Planned Behaviour (TPB) model is applied and tested to explain the formation of such adoption intention. An exogenous variable accounting for individuals' resistance to change was added to the basic model. The elicitation study allowed salient modal beliefs to be obtained, which were used, with the support of the literature, to design the questionnaire. After the screening phase, data collected from the main survey were analysed to evaluate the measurement model's reliability and validity. Consistent with the theory, the results of the structural equation analysis revealed that attitude, subjective norm, and perceived behavioural control positively affected the adoption intention. Specifically, the variable with the highest estimated loading factor was perceived behavioural control, and the most important belief related to each construct was determined (e.g., energy saving was observed to be the most significant belief linked with attitude). Further investigation indicated that the added exogenous variable has a negative influence on intention; this finding partially confirmed the hypothesis, since the influence was indirect: the relationship was mediated by attitude. Implications and suggestions for future research are discussed.
Keywords: adoption of innovation, consumers behaviour, energy management, smart grid, theory of planned behaviour
Procedia PDF Downloads 409
3726 GIS Application in Surface Runoff Estimation for Upper Klang River Basin, Malaysia
Authors: Suzana Ramli, Wardah Tahir
Abstract:
Estimation of surface runoff depth is a vital part of any rainfall-runoff modeling. It leads to stream flow calculation and, later, prediction of flood occurrences. GIS (Geographic Information System) is an advanced and apposite tool for simulating hydrological models due to its realistic representation of topography. The paper discusses the calculation of surface runoff depth for two selected events using GIS with the Curve Number method for the Upper Klang River basin. GIS enables map intersection between soil type and land use, which then produces the curve number map. The results show good correlation between simulated and observed values, with an R² of more than 0.7. Acceptable performance on statistical measures, namely mean error, absolute mean error, RMSE, and bias, is also demonstrated in the paper.
Keywords: surface runoff, geographic information system, curve number method, environment
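For reference, the Curve Number method referred to above computes the runoff depth Q from the rainfall depth P (both in mm) through the potential maximum retention S, derived from the curve number CN produced by the soil/land-use intersection; the initial abstraction ratio of 0.2 is the conventional SCS default.

$$S=\frac{25400}{CN}-254,\qquad Q=\begin{cases}\dfrac{(P-0.2S)^{2}}{P+0.8S}, & P>0.2S\\[4pt] 0, & \text{otherwise}\end{cases}$$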
Procedia PDF Downloads 282
3725 The Impact of International Financial Reporting Standards (IFRS) Adoption on Performance’s Measure: A Study of UK Companies
Authors: Javad Izadi, Sahar Majioud
Abstract:
This study presents an approach to assessing the choice of performance measures of companies in the United Kingdom after the application of IFRS in 2005. The aim is to investigate the effects of IFRS on the choice of performance evaluation methods of UK companies. Through an econometric model, we analyse the relationship between the dependent variable, the firm's performance (a nominal variable), and the independent ones. The independent variables are split into two main groups: the first is the group of accounting-based measures (earnings per share, return on assets, and return on equity); the second is the group of market-based measures (market value of property, plant and equipment, research and development, sales growth, market-to-book value, leverage, segment, and size of companies). The regression used is a multinomial logistic regression performed on a sample of 130 UK listed companies. Our findings show that, after IFRS adoption, companies give more importance to some variables, such as return on equity and sales growth, in assessing their performance, whereas return on assets and the market-to-book value ratio do not have as much importance as before IFRS. Also, some variables, such as earnings per share, no longer have any impact on the performance measures. These findings are empirically important for businesses in subjects related to IFRS and companies' performance measurement.
Keywords: performance's measure, nominal variable, econometric model, evaluation methods
Procedia PDF Downloads 139
3724 Anticipation of Bending Reinforcement Based on Iranian Concrete Code Using Meta-Heuristic Tools
Authors: Seyed Sadegh Naseralavi, Najmeh Bemani
Abstract:
In this paper, different concrete codes, including those of America, New Zealand, Mexico, Italy, India, Canada, Hong Kong, the Euro Code, and Britain, are compared with the Iranian concrete design code. First, by using an Adaptive Neuro Fuzzy Inference System (ANFIS), the codes having the most correlation with the Iranian ninth issue of the national regulation are determined. Consequently, two prediction methods are used for comparing the codes: Artificial Neural Network (ANN) and multi-variable regression. The results show that the ANN performs better. Prediction is done using only the tensile steel ratio, ignoring the compression steel ratio.
Keywords: adaptive neuro fuzzy inference system, anticipate method, artificial neural network, concrete design code, multi-variable regression
Procedia PDF Downloads 286
3723 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks
Authors: Chad Brown
Abstract:
This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with sample size.
Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes
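A minimal sketch of such a sieve architecture follows; the width and depth growth rates are illustrative stand-ins, not the paper's sieve growth conditions.

```python
import math
import torch.nn as nn

def sieve_mlp(n_samples: int, in_dim: int) -> nn.Sequential:
    # Width and depth grow with sample size, as in the sieve construction.
    width = max(8, int(math.sqrt(n_samples)))   # illustrative rate
    depth = max(2, int(math.log(n_samples)))    # illustrative rate
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]   # fully connected + ReLU
        d = width
    layers.append(nn.Linear(d, 1))               # scalar regression output
    return nn.Sequential(*layers)

net = sieve_mlp(n_samples=10_000, in_dim=5)
print(net)
```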
Procedia PDF Downloads 44
3722 Characterization of Printed Reflectarray Elements on Variable Substrate Thicknesses
Authors: M. Y. Ismail, Arslan Kiyani
Abstract:
Narrow bandwidth and high loss limit the use of reflectarray antennas in some applications. This article reports on the feasibility of employing strategic reflectarray resonant elements to characterize the reflectivity performance of reflectarrays in the X-band frequency range. Strategic reflectarray resonant elements incorporating variable substrate thicknesses ranging from 0.016λ to 0.052λ have been analyzed in terms of reflection loss and reflection phase performance. The effect of substrate thickness has been validated using the waveguide scattering parameter technique. It has been demonstrated that as the substrate thickness is increased from 0.508 mm to 1.57 mm, the measured reflection loss of the dipole element decreases from 5.66 dB to 3.70 dB, with the 10% bandwidth increasing from 39 MHz to 64 MHz. Similarly, the measured reflection loss of the triangular loop element decreases from 20.25 dB to 7.02 dB, with the 10% bandwidth increasing from 12 MHz to 23 MHz. The results also show a significant decrease in the slope of the reflection phase curve. A Figure of Merit (FoM) has also been defined for comparing the static phase range of the resonant elements under consideration. Moreover, a novel numerical model based on analytical equations has been established, incorporating the material properties of the dielectric substrate and the electrical properties of different reflectarray resonant elements, to obtain the progressive phase distribution for each individual reflectarray resonant element.
Keywords: numerical model, reflectarray resonant elements, scattering parameter measurements, variable substrate thickness
Procedia PDF Downloads 275
3721 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning
Authors: Jiahao Tian, Michael D. Porter
Abstract:
Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable for achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression, using an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and the day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data, i.e., data for which the event time is only known to lie within an interval instead of being observed exactly. This type of data is prevalent in criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with a statistical approach, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction. In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in allocating limited resources, either by improving the existing patrol plan using the discovered day-of-week clusters or by directing any extra resources available.
Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation
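A minimal EM sketch for the interval-censored intensity idea follows, with hourly bins and invented event intervals; the smoothness penalties described above are omitted for brevity.

```python
import numpy as np

B = 24                                   # hour-of-day bins
lam = np.ones(B)                         # initial intensity per bin
events = [(0, 8), (6, 12), (6, 12), (18, 24), (20, 22)]  # [start, end) hours
exposure = np.ones(B)                    # observation time per bin (days)

for _ in range(100):
    expected = np.zeros(B)
    for s, e in events:                  # E-step: spread each censored event
        w = lam[s:e] / lam[s:e].sum()    # over its interval, proportional to lam
        expected[s:e] += w
    lam = expected / exposure            # M-step: Poisson MLE per bin
print(lam.round(3))
```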
Procedia PDF Downloads 66
3720 Quantitative Assessment of Soft Tissues by Statistical Analysis of Ultrasound Backscattered Signals
Authors: Da-Ming Huang, Ya-Ting Tsai, Shyh-Hau Wang
Abstract:
Ultrasound signals backscattered from soft tissues depend mainly on the size, density, distribution, and other elastic properties of the scatterers in the interrogated sample volume. Quantitative analysis of ultrasonic backscattering is frequently implemented using a statistical approach, because backscattered signals tend to behave as random variables. Thus, statistical analyses such as Nakagami statistics have been applied to characterize the density and distribution of the scatterers in a sample. Yet the accuracy of the statistical analysis can be readily affected by the received signals, which depend on the nature of the incident ultrasound wave and the acoustical properties of the samples. In the present study, efforts were therefore made to explore the effects of the ultrasound operational mode and the attenuation of biological tissue on the estimation of the corresponding Nakagami statistical parameter (m parameter). In vitro measurements were performed on healthy and pathological fibrotic porcine livers using different single-element ultrasound transducers and duty cycles of the incident tone burst, ranging from 3.5 to 7.5 MHz and from 10 to 50%, respectively. Results demonstrated that the estimated m parameter tends to be sensitively affected by the ultrasound operational mode as well as by tissue attenuation. Healthy and pathological tissues may be characterized quantitatively by the m parameter under fixed measurement conditions and proper calibration.
Keywords: ultrasound backscattering, statistical analysis, operational mode, attenuation
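For reference, the Nakagami m parameter is often estimated from the backscattered envelope R by the moment-based (inverse normalized variance) formula m = (E[R²])² / Var(R²); a short sketch on a synthetic envelope follows.

```python
import numpy as np

rng = np.random.default_rng(4)
m_true, omega = 0.8, 1.0
# Nakagami(m, omega) envelope: R^2 is Gamma(shape=m, scale=omega/m).
R = np.sqrt(rng.gamma(m_true, omega / m_true, 5000))

r2 = R ** 2
m_hat = r2.mean() ** 2 / r2.var()        # moment estimator of m
print(f"estimated m = {m_hat:.3f} (true {m_true})")
```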
Procedia PDF Downloads 324
3719 An Efficient Fundamental Matrix Estimation for Moving Object Detection
Authors: Yeongyu Choi, Ju H. Park, S. M. Lee, Ho-Youl Jung
Abstract:
In this paper, an improved method for estimating the fundamental matrix is proposed and applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation for moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motions of moving objects are calculated from the pyramidal Lucas-Kanade optical flow algorithm. Through epipolar geometry analysis using RANSAC, the fundamental matrix is calculated. In this method, we improve the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier. Through simulations, we compare performance while varying the two threshold values.
Keywords: corner detection, optical flow, epipolar geometry, RANSAC
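A sketch of the pipeline described above using standard OpenCV calls; the file names and parameter values are placeholders, and the thresholds are single illustrative choices rather than the paper's tuned pair.

```python
import cv2

frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder inputs
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# 1) Harris corner detection.
pts1 = cv2.goodFeaturesToTrack(frame1, maxCorners=500, qualityLevel=0.01,
                               minDistance=7, useHarrisDetector=True)

# 2) Pyramidal Lucas-Kanade optical flow tracks corners into frame 2.
pts2, status, _ = cv2.calcOpticalFlowPyrLK(frame1, frame2, pts1, None)
good1 = pts1[status.ravel() == 1].reshape(-1, 2)
good2 = pts2[status.ravel() == 1].reshape(-1, 2)

# 3) Fundamental matrix via RANSAC; epipolar outliers are moving-object
#    candidates, while inliers belong to the static background.
F, mask = cv2.findFundamentalMat(good1, good2, cv2.FM_RANSAC, 1.0, 0.99)
moving_candidates = good2[mask.ravel() == 0]
print(F, len(moving_candidates))
```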
Procedia PDF Downloads 409
3718 An Enhanced Particle Swarm Optimization Algorithm for Multiobjective Problems
Authors: Houda Abadlia, Nadia Smairi, Khaled Ghedira
Abstract:
Multiobjective Particle Swarm Optimization (MOPSO) has shown effective performance in solving test functions and real-world optimization problems. However, this method has a premature convergence problem, which may lead to a lack of diversity. In order to improve its performance, this paper presents a hybrid approach which embeds MOPSO into the island model and integrates a local search technique, Variable Neighborhood Search, to enhance diversity in the swarm. Experiments on two series of test functions have shown the effectiveness of the proposed approach. A comparison with other evolutionary algorithms shows that the proposed approach performed well in solving multiobjective optimization problems.
Keywords: particle swarm optimization, migration, variable neighborhood search, multiobjective optimization
Procedia PDF Downloads 168
3717 The Effects of Early Maternal Separation on Risky Choice in Rats
Authors: Osvaldo Collazo, Cristiano Valerio Dos Santos
Abstract:
Early maternal separation has been shown to bring about many negative effects on behavior in rats. In the present study, we evaluated the effects of early maternal separation on risky choice in rats. One group of male and female Wistar rats was exposed to an early maternal separation protocol while a control group was left undisturbed. Then both groups were exposed to a series of behavioral tests, including a test of risky choice, where one alternative offered a constant reward while the other offered a variable reward. There was a difference between groups when they chose between a variable and a constant reward delay, but no other difference was significant. These results suggest that early maternal separation may be related to a greater preference for shorter delays, which is characteristic of more impulsive choices.
Keywords: early maternal separation, impulsivity, risky choice, variability
Procedia PDF Downloads 258
3716 Nexus Between Agricultural Insurance Scheme and Performance of Agribusiness in Nigeria
Authors: Festus Epetimehin
Abstract:
Agriculture remains the dominant sector in the rural areas where over 70% of Nigerians reside, and it is still the backbone of our economy. The observed poor performance of farmers in agricultural productivity is due to the nature of the risks and uncertainties in agriculture. Agricultural insurance is one of the mechanisms by which farmers can stabilize farm income and investment. The study examined the relationship between the agricultural insurance scheme (AIS) and the performance of agribusiness in Nigeria. The study adopted an exploratory research design, which is an ex-ante research approach. One hundred copies of a structured questionnaire were administered for the purpose of the study. Correlation analysis and regression analysis were employed. The correlation analysis revealed that the independent variable, the agricultural insurance scheme (AIS), is positively and significantly correlated with the set of dependent variables: turnover (ABT) = 0.582**, profitability (ABP) = 0.321**, solvency (ABS) = 0.418**, and cost of production (ABC) = 0.23**. The regression analysis also revealed the degree of relationship between the independent variable (AIS) and the dependent variables, where a one percent (1%) increase in the independent variable leads to changes of 33.9% (ABT), 9.7% (ABP), 17.5% (ABS), and 1.5% (ABC). The study recommended that the Federal Government, in collaboration with the participating agricultural insurers, embark on an awareness campaign across the length and breadth of Nigeria on government support and insurance schemes for farmers. Government should also ensure that the loan and insurance schemes extend beyond mechanized farmers to include intensive subsistence farmers, given that they are dominant in most farm produce markets.
Keywords: agribusiness, agricultural insurance, performance, turnover, solvency, agricultural risks
Procedia PDF Downloads 97
3715 Instrumental Characterization of Cyanobacteria as Polyhydroxybutyrate Producer
Authors: Eva Slaninova, Diana Cernayova, Zuzana Sedrlova, Katerina Mrazova, Petr Sedlacek, Jana Nebesarova, Stanislav Obruca
Abstract:
Cyanobacteria are gram-negative prokaryotes belonging to a group of photosynthetic bacteria. In comparison with heterotrophic microorganisms, cyanobacteria utilize atmospheric nitrogen and carbon dioxide without any additional substrates. This ability could be employed in biotechnology for the production of bioplastics, specifically polyhydroxyalkanoates (PHAs), which are primarily accumulated as a storage material in cells in the form of intracellular granules. In this study, two cyanobacterial cultures of the genus Synechocystis were used, namely Synechocystis sp. PCC 6803 and Synechocystis salina CCALA 192. Several approaches were optimized and used, including microscopic techniques such as cryo-scanning electron microscopy (Cryo-SEM), transmission electron microscopy (TEM), and fluorescence lifetime imaging microscopy using Nile red as a fluorescent probe (FLIM). With these instrumental techniques, the morphology of the intracellular space and the cell surface were characterized. The next group of methods employed comprised spectroscopic techniques such as UV-Vis spectroscopy, measured in two modes (turbidimetry and integrating sphere), and Fourier transform infrared spectroscopy (FTIR). All these diverse techniques were used for the detection and characterization of pigments (chlorophylls, carotenoids, phycocyanin, etc.) and PHAs, in our case poly(3-hydroxybutyrate) (P3HB). To verify the results, gas chromatography (GC) was employed, specifically for determining the amount of P3HB in biomass. Cyanobacteria were also characterized as polyhydroxybutyrate producers by flow cytometry, which can count cells and at the same time distinguish cells with and without P3HB thanks to the fluorescent probe BODIPY and the live/dead fluorescent probe SYTO Blue. Based on the results, the P3HB content in cyanobacterial cells was determined, as well as the overall fitness of the cells. Acknowledgment: This study was partly funded by the project GA19-29651L of the Czech Science Foundation (GACR) and partly by the Austrian Science Fund (FWF), project I 4082-B25.
Keywords: cyanobacteria, fluorescent probe, microscopic techniques, poly(3-hydroxybutyrate), spectroscopy, chromatography
Procedia PDF Downloads 230
3714 The Relation between Earnings Management with the Financial Reporting
Authors: Anocha Rojanapanich
Abstract:
The objective of this research is to investigate the effects of earnings management on the corporate transparency of companies in the Dusit area, with financial reporting reliability and stakeholder acceptance as mediating variables. Companies in the Dusit area are taken as the population and sample, and a questionnaire is used to collect data. Exploratory factor analysis is implemented to ensure construct validity, correlation statistics are used to test the relationships among all variables, and ordinary least squares regression is used to test the hypothesized relationships. The results show that earnings management has a significant and negative impact on financial reporting reliability, stakeholder acceptance, and corporate transparency. Both financial reporting reliability and stakeholder acceptance have an important and positive effect on corporate transparency, and they thus mediate the earnings management-corporate transparency relationship.
Keywords: dusit area workplace, earnings management, financial report, business and marketing management
Procedia PDF Downloads 407
3713 Online Allocation and Routing for Blood Delivery in Conditions of Variable and Insufficient Supply: A Case Study in Thailand
Authors: Pornpimol Chaiwuttisak, Honora Smith, Yue Wu
Abstract:
Blood is a perishable product which suffers physical deterioration and has a specific fixed shelf life. Although its value during the shelf life is constant, fresh blood is preferred for treatment. However, transportation costs are a major factor to be considered by administrators of Regional Blood Centres (RBCs), which act as blood collection and distribution centres. A trade-off must therefore be reached between transportation costs and short-term holding costs. In this paper, we propose a number of algorithms for online allocation and routing of blood supplies, for use in conditions of variable and insufficient blood supply. A case study in northern Thailand provides an application of the allocation and routing policies tested. The plan proposed for daily allocation and distribution of blood supplies consists of two components. Firstly, fixed routes are determined for the supply of hospitals which are far from an RBC; over the planning period of one week, each hospital on the fixed routes is visited once, and a robust allocation of blood is made to these hospitals that can be guaranteed on a suitably high percentage of days, despite variable supplies. Secondly, a variable daily route is employed for close-by hospitals, for which more than one visit per week may be needed to fulfil targets. The variable routing takes into account the amount of blood available for each day's deliveries, which is only known on the morning of delivery. For hospitals on the variable routes, the days and amounts of deliveries cannot be guaranteed but are designed to attain targets over the six-day planning horizon. In the conditions of blood shortage encountered in Thailand, and commonly in other developing countries, it is often the case that hospitals request more blood than is needed, in the knowledge that only a proportion of all requests will be met. Our proposal is for blood supplies to be allocated and distributed to each hospital according to equitable targets based on historical demand data, calculated with regard to expected daily blood supplies. We suggest several policies that could be chosen by the decision makers for the daily distribution of blood. The different policies provide different trade-offs between transportation and holding costs; variations in the costs of transportation, such as the price of petrol, could make different policies the most beneficial at different times. We present an application of the policies to a realistic case study of the RBC in Chiang Mai province, in the northern region of Thailand. The analysis includes a total of more than 110 hospitals, with 29 hospitals considered on the variable route. The study is expected to be a pilot for other regions of Thailand. Computational experiments are presented. Concluding remarks include the benefits gained by the online methods and future recommendations.
Keywords: online algorithm, blood distribution, developing country, insufficient blood supply
Procedia PDF Downloads 332
3712 Cycle Number Estimation Method on Fatigue Crack Initiation Using Voronoi Tessellation and the Tanaka Mura Model
Authors: Mohammad Ridzwan Bin Abd Rahim, Siegfried Schmauder, Yupiter HP Manurung, Peter Binkele, Meor Iqram B. Meor Ahmad, Kiarash Dogahe
Abstract:
This paper deals with short crack initiation in the material P91 under cyclic loading at two different temperatures, concluding with the estimation of the short crack initiation Wöhler (S/N) curve. An artificial but representative model microstructure was generated using Voronoi tessellation, and the non-uniform stress distribution was subsequently calculated using the Finite Element Method. The number of cycles needed for crack initiation is estimated on the basis of the stress distribution in the model by applying the physically based Tanaka-Mura model. Initial results show that the number of cycles to crack initiation is strongly correlated with temperature.
Keywords: short crack initiation, P91, Wöhler curve, Voronoi tessellation, Tanaka-Mura model
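For orientation, the Tanaka-Mura model is commonly quoted in the following form for the number of cycles Nᵢ to nucleate a crack along a slip band of length d, with shear modulus G, specific fracture energy Wₛ, Poisson's ratio ν, average resolved shear stress range Δτ on the band, and frictional stress k; this is the textbook form, and the paper's exact variant may differ.

$$N_{i}=\frac{8\,G\,W_{s}}{\pi\left(1-\nu\right)d\left(\Delta\tau-2k\right)^{2}}$$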
Procedia PDF Downloads 101
3711 Residual Lifetime Estimation for Weibull Distribution by Fusing Expert Judgements and Censored Data
Authors: Xiang Jia, Zhijun Cheng
Abstract:
The residual lifetime of a product is the operating time between the current time and the time point at which failure happens. Residual lifetime estimation is rather important in reliability analysis. To predict the residual lifetime, it is necessary to assume or verify a particular distribution that the lifetime of the product follows, and the two-parameter Weibull distribution is frequently adopted to describe lifetimes in reliability engineering. Due to time constraints and cost reduction, a life testing experiment is usually terminated before all the units have failed, so censored data is usually collected. In addition, other information can also be obtained for reliability analysis: expert judgements are considered, as it is common that experts can provide useful information concerning reliability. Therefore, the residual lifetime is estimated for the Weibull distribution by fusing censored data and expert judgements in this paper. First, closed forms for the point estimate and confidence interval of the residual lifetime under the Weibull distribution are presented. Next, the expert judgements are regarded as prior information, and a way to determine the prior distribution of the Weibull parameters is developed. For completeness, the cases in which there is only one expert judgement and in which there are more than two are both considered. Further, the posterior distribution of the Weibull parameters is derived. Since it is difficult to derive the posterior distribution of the residual lifetime directly, a sample-based method is proposed to generate posterior samples of the Weibull parameters based on Markov Chain Monte Carlo (MCMC), and these samples are used to obtain the Bayes estimate and credible interval for the residual lifetime. Finally, an illustrative example is discussed to show the application. It demonstrates that the proposed method is rather simple, satisfactory, and robust.
Keywords: expert judgements, information fusion, residual lifetime, Weibull distribution
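As background for the closed forms mentioned above: for a two-parameter Weibull lifetime with shape β and scale η, the probability that the residual lifetime exceeds x given survival to age t follows directly from the survival function S(t) = exp[−(t/η)^β]:

$$P(X>t+x \mid X>t)=\frac{S(t+x)}{S(t)}=\exp\left[\left(\frac{t}{\eta}\right)^{\beta}-\left(\frac{t+x}{\eta}\right)^{\beta}\right]$$

Setting this equal to a chosen reliability level and solving for x gives a quantile-type point estimate of the residual lifetime.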
Procedia PDF Downloads 142
3710 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study
Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb
Abstract:
The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor; the results were compared with those measured with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in the phantom, and SPSS software was used for data analysis. Results showed that with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to the DLP and CT-Expo methods, respectively. ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and 48.8% lower with TCM. The SNR values calculated were significantly different (p = 0.03 < 0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique for reducing SSDE and ED while preserving image quality at a high diagnostic reference level for thoracic CT examinations.
Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose
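To make the DLP-based step concrete, effective dose is obtained as ED = DLP × k; with the commonly used chest conversion coefficient k ≈ 0.014 mSv·mGy⁻¹·cm⁻¹ (an assumed, literature-typical value), a DLP of about 470 mGy·cm would correspond to the ≈6.6 mSv reported here. The DLP figure is back-calculated for illustration, not taken from the study.

$$ED = DLP \times k \approx 470\ \text{mGy·cm}\times 0.014\ \tfrac{\text{mSv}}{\text{mGy·cm}}\approx 6.6\ \text{mSv}$$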
Procedia PDF Downloads 222