Search results for: state estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9012

8832 Tool Wear Monitoring of High Speed Milling Based on Vibratory Signal Processing

Authors: Hadjadj Abdechafik, Kious Mecheri, Ameur Aissa

Abstract:

The objective of this study is to develop a processing method for the vibratory signals generated during horizontal high-speed milling without coolant, in order to establish a monitoring system able to improve machining performance. Numerous tests were carried out on a horizontal high-speed machining centre (PCI Météor 10), under given cutting conditions, using a milling cutter with a single insert whose frontal wear was measured from its new state, taken as the reference, up to a worn state considered unsuitable for further use. The results show that the first harmonic follows the evolution of frontal wear well. In addition, a wavelet transform is applied for signal processing and proves useful for observing the evolution of the wavelet approximations over the cutting tool's life. The power and the Root Mean Square (RMS) values of the wavelet-transformed signal gave the best results and can be used for tool wear estimation. All these features constitute suitable indicators for effective detection of tool wear and can serve as input parameters of an online monitoring system. We also noted the marked influence of the machining cycle on measurement quality through the bias it introduces into the signal; this phenomenon is particularly pronounced in horizontal milling and is ignored in most studies.
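
As an illustration of the feature-extraction step described above, here is a minimal sketch assuming the PyWavelets package and a synthetic vibration record; it is not the authors' processing chain, only the kind of wavelet-approximation RMS and power features the abstract refers to.

```python
# Illustrative sketch (not the authors' code): RMS and power of the wavelet
# approximation of a vibration signal, assuming the PyWavelets package.
import numpy as np
import pywt

def wavelet_features(signal, wavelet="db4", level=4):
    """Return RMS and mean power of the level-`level` wavelet approximation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_n, cD_n, ..., cD_1]
    approx = coeffs[0]                                    # approximation coefficients
    rms = np.sqrt(np.mean(approx ** 2))
    power = np.mean(approx ** 2)
    return rms, power

# Hypothetical usage on a synthetic vibration record sampled at 10 kHz
t = np.linspace(0, 1, 10_000)
vibration = np.sin(2 * np.pi * 150 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(wavelet_features(vibration))
```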

Keywords: flank wear, vibration, milling, signal processing, monitoring

Procedia PDF Downloads 595
8831 Games behind Bars: A Longitudinal Study of Inmates' Pro-Social Preferences

Authors: Mario A. Maggioni, Domenico Rossignoli, Simona Beretta, Sara Balestri

Abstract:

The paper presents the results of a longitudinal randomized control trial implemented in 2016 in two state prisons in California (USA). The subjects were randomly assigned to a 10-month program (GRIP, Guiding Rage Into Power) aimed at undoing the destructive behavioral patterns that lead to criminal actions by raising the individual’s 'mindfulness'. This study tests whether participation in this program (treatment), based on strong relationships and mutual help, affects the pro-social behavior of participants, in particular with reference to trust and inequality aversion. The research protocol entails the administration of two questionnaires, including a set of behavioral situations ('games') widely used in the relevant literature, to 80 inmates: 42 treated (enrolled in the program) and 38 controls. The first questionnaire was administered before treatment and randomization took place; the second at the end of the program. The results of a Difference-in-Differences estimation procedure show that trust increases significantly among GRIP participants compared to the control group. The result is robust to alternative estimation techniques and to the inclusion of a set of covariates to further control for idiosyncratic characteristics of the prisoners.
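
For readers unfamiliar with the estimator, below is a minimal sketch of a two-period Difference-in-Differences regression using statsmodels; the variable names and toy data are hypothetical and do not come from the study.

```python
# Illustrative two-period difference-in-differences estimate (not the study's
# protocol); column names and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per inmate per wave: trust score, treatment dummy, post-program dummy
df = pd.DataFrame({
    "trust":   [4.1, 4.0, 4.3, 5.2, 3.9, 4.1, 4.2, 5.5],
    "treated": [0,   0,   1,   1,   0,   0,   1,   1],
    "post":    [0,   1,   0,   1,   0,   1,   0,   1],
})

# The coefficient on treated:post is the DiD estimate of the treatment effect.
model = smf.ols("trust ~ treated + post + treated:post", data=df).fit()
print(model.summary())
```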

Keywords: behavioral economics, difference in differences, longitudinal study, pro-social preferences

Procedia PDF Downloads 390
8830 Evaluation of Expected Annual Loss Probabilities of RC Moment Resisting Frames

Authors: Saemee Jun, Dong-Hyeon Shin, Tae-Sang Ahn, Hyung-Joon Kim

Abstract:

Building loss estimation methodologies, which have advanced considerably in recent decades, are usually used to estimate the socio-economic impacts resulting from seismic structural damage. In accordance with these methods, this paper presents the evaluation of the annual loss probability of a reinforced concrete moment-resisting frame designed according to the Korean Building Code. The annual loss probability is defined by (1) a fragility curve obtained from a capacity spectrum method similar to that adopted in HAZUS, and (2) a seismic hazard curve derived from annual frequencies of exceedance per peak ground acceleration. Seismic fragilities are computed to calculate the annual loss probability of a given structure using functions depending on structural capacity, seismic demand, structural response and the probability of exceeding damage-state thresholds. This study carried out a nonlinear static analysis to obtain the capacity of an RC moment-resisting frame selected as a prototype building. The analysis results show that the annual probability of the prototype building sustaining extensive structural damage is estimated at 0.004%.
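
As a rough illustration of how a fragility curve and a hazard curve combine into an annual damage-state probability, the sketch below uses a lognormal fragility and a power-law hazard curve with made-up parameters; it is not the paper's actual model.

```python
# Hedged sketch: combining a lognormal fragility curve with an annual hazard
# curve to obtain an annual damage-state probability. All parameters below are
# hypothetical, not the values derived in the paper.
import numpy as np
from scipy.stats import lognorm

pga = np.linspace(0.01, 2.0, 500)                     # peak ground acceleration (g)

# Fragility: probability of reaching "extensive" damage given PGA
median, beta = 0.6, 0.5                               # hypothetical parameters
p_damage = lognorm(s=beta, scale=median).cdf(pga)

# Hazard: annual frequency of exceedance, here a simple power-law fit
k0, k1 = 1e-4, 2.5                                    # hypothetical parameters
lam = k0 * pga ** (-k1)

# Annual rate of damage = integral of fragility times |d(lambda)/d(PGA)|
annual_rate = np.trapz(p_damage * np.abs(np.gradient(lam, pga)), pga)
annual_prob = 1.0 - np.exp(-annual_rate)              # Poisson assumption
print(f"annual probability of extensive damage ~ {annual_prob:.3e}")
```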

Keywords: expected annual loss, loss estimation, RC structure, fragility analysis

Procedia PDF Downloads 396
8829 A Quantification Method of Attractiveness of Stations and an Estimation Method of Number of Passengers Taking into Consideration the Attractiveness of the Station

Authors: Naoya Ozaki, Takuya Watanabe, Ryosuke Matsumoto, Noriko Fukasawa

Abstract:

In the metropolitan areas of Japan, many stations have shopping areas, and escalators and elevators are installed to make the stations barrier-free. Further, many areas around the stations are being redeveloped. Railway operators want to know how much these circumstances affect the attractiveness of a station or the number of passengers using it. We therefore conducted a questionnaire survey of station users in the metropolitan areas to identify the factors affecting station attractiveness. Based on the analysis of the survey, we developed a method to quantitatively evaluate the attractiveness of stations. We also developed a method for estimating the number of passengers based on the combination of the quantitatively evaluated attractiveness of the station and the residential and labor population around it. Finally, we derived precise linear regression models estimating the attractiveness of a station and its number of passengers.
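
A minimal sketch of the kind of linear regression described, with an intercept, a quantified attractiveness score, and the surrounding residential and labor population as regressors; all numbers are hypothetical, not survey data.

```python
# Hedged sketch of an ordinary least squares model for passenger estimation;
# the attractiveness scores, populations and passenger counts are made up.
import numpy as np

# columns: [attractiveness score, residential population, labor population]
X = np.array([[3.2, 25_000, 12_000],
              [4.5, 40_000, 30_000],
              [2.8, 18_000,  8_000],
              [5.1, 55_000, 45_000]])
y = np.array([21_000, 48_000, 14_000, 70_000])        # daily passengers

# OLS with an intercept term
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", coef)
```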

Keywords: attractiveness of the station, estimation method, number of passengers of the station, redevelopment around the station, renovation of the station

Procedia PDF Downloads 284
8828 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models

Authors: I. V. Pinto, M. R. Sooriyarachchi

Abstract:

Data arising in many settings frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of order 1 and order 2 (MQL1, MQL2) and penalized quasi-likelihood of order 1 and order 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to use, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2 and failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations under all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.

Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error

Procedia PDF Downloads 138
8827 Home Legacy Device Output Estimation Using Temperature and Humidity Information by Adaptive Neural Fuzzy Inference System

Authors: Sung Hyun Yoo, In Hwan Choi, Jun Ho Jung, Choon Ki Ahn, Myo Taeg Lim

Abstract:

Home energy management systems (HEMS) have been introduced to reduce power consumption. A HEMS performs electric power control for indoor electric devices; however, it commonly handles only smart devices. In this paper, we propose output estimation for home legacy devices using an adaptive neural fuzzy inference system (ANFIS). The paper discusses the overview and the architecture of the system. In addition, the accuracy of the output estimation using ANFIS is demonstrated via a numerical example.

Keywords: artificial neural fuzzy inference system (ANFIS), home energy management system (HEMS), smart device, legacy device

Procedia PDF Downloads 537
8826 Development of a Shape Based Estimation Technology Using Terrestrial Laser Scanning

Authors: Gichun Cha, Byoungjoon Yu, Jihwan Park, Minsoo Park, Junghyun Im, Sehwan Park, Sujung Sin, Seunghee Park

Abstract:

The goal of this research is to estimate structural shape change using terrestrial laser scanning. The study develops data-reduction and shape-change estimation algorithms for large-capacity scan data. The point cloud of the scan data is converted to voxels and sampled. A shape estimation technique is studied to detect changes in structural patterns, such as skyscrapers, bridges, and tunnels, based on large point cloud data. The point cloud analysis applies the octree data structure to speed up the post-processing for change detection. The point cloud data provide a representative value of the shape information and are used as a model for detecting point cloud changes within the data structure. The shape estimation model aims at a technology that can detect not only gradual but also sudden structural changes in the event of disasters such as earthquakes, typhoons, and fires, thereby preventing major accidents caused by aging and disasters. The study is expected to improve the efficiency of structural health monitoring and maintenance.
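
As context for the voxel-based data reduction mentioned above, here is a minimal plain-NumPy sketch that keeps one centroid point per occupied voxel; the voxel size and the use of centroids are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch of voxel-grid downsampling of a point cloud (plain NumPy).
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Keep one representative (centroid) point per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)       # voxel indices
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    counts = np.bincount(inverse)
    centroids = np.zeros((counts.size, 3))
    for dim in range(3):
        centroids[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return centroids

cloud = np.random.default_rng(1).random((100_000, 3)) * 10.0    # synthetic scan
print(voxel_downsample(cloud, voxel_size=0.2).shape)
```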

Keywords: terrestrial laser scanning, point cloud, shape information model, displacement measurement

Procedia PDF Downloads 231
8825 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years. It has many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment becomes a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the under-determined blind source separation problem. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm is proposed to estimate the mixing matrix for speech signals under the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. The mixing of four speech signals into two channels is taken as an example. Simulation results show that the approach not only improves the accuracy of estimation but also applies to any mixing matrix.
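
The proposed improved potential-function method is not reproduced here; as context for the clustering step, below is a basic sparse-component-analysis sketch that estimates a 2x4 mixing matrix by clustering sample directions with DBSCAN (one of the listed keywords). The data, thresholds and DBSCAN parameters are hypothetical.

```python
# Illustrative sketch (not the proposed algorithm): estimating mixing-matrix
# columns from two observed channels by clustering sample directions.
import numpy as np
from sklearn.cluster import DBSCAN

def estimate_mixing_matrix(X, eps=0.05, min_samples=20):
    """X: 2 x T matrix of observed mixtures. Returns 2 x K column estimates."""
    # Keep only high-energy samples, where one source is likely dominant
    energy = np.linalg.norm(X, axis=0)
    Xs = X[:, energy > np.percentile(energy, 80)]
    # Map +a and -a to the same direction, then normalize to the unit circle
    Xs = Xs * np.sign(Xs[0, :])
    U = Xs / np.linalg.norm(Xs, axis=0)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(U.T)
    cols = [U[:, labels == k].mean(axis=1) for k in set(labels) if k != -1]
    return np.column_stack([c / np.linalg.norm(c) for c in cols])

# Hypothetical usage with sparse synthetic sources (4 sources, 2 channels)
rng = np.random.default_rng(0)
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) > 0.9)
A = rng.standard_normal((2, 4))
print(estimate_mixing_matrix(A @ S))
```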

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 132
8824 The Influence of Air Temperature Controls in Estimation of Air Temperature over Homogeneous Terrain

Authors: Fariza Yunus, Jasmee Jaafar, Zamalia Mahmud, Nurul Nisa’ Khairul Azmi, Nursalleh K. Chang

Abstract:

Variation of air temperature from one place to another is caused by air temperature controls. In general, the most important control of air temperature is elevation. Another significant independent variable in estimating air temperature is the location of meteorological stations. Distance to the coastline and land use type also contribute significantly to variations in air temperature. On the other hand, over homogeneous terrain, direct interpolation of discrete air temperature points works well for estimating air temperature in un-sampled areas; in this process the estimation is based solely on the discrete air temperature points. However, this study shows that air temperature controls also play significant roles in estimating air temperature over the homogeneous terrain of Peninsular Malaysia. An Inverse Distance Weighting (IDW) interpolation technique was adopted to generate continuous air temperature data. The study compared two different datasets: observed mean monthly data of T, and the estimation error T–T', where T' is the value estimated from a multiple regression model. The multiple regression model considered eight independent variables, namely elevation, latitude, longitude, distance to coastline, and four land use types (water bodies, forest, agriculture and built-up areas), to represent the role of air temperature controls. Cross-validation analysis was conducted to assess the accuracy of the estimated values. The final results show that the T–T' estimates produced lower errors for mean monthly air temperature over homogeneous terrain in Peninsular Malaysia.
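
For readers unfamiliar with the interpolation step, here is a minimal IDW sketch; the station coordinates and temperatures are hypothetical and the power parameter is an assumption, not the study's setting.

```python
# Minimal Inverse Distance Weighting (IDW) interpolation sketch.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Estimate z at query points as a distance-weighted mean of known points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.where(d == 0, 1e-12, d)          # avoid division by zero at stations
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)

stations = np.array([[101.7, 3.1], [100.3, 5.4], [102.2, 2.2]])   # lon, lat
temps = np.array([27.4, 26.8, 27.9])                               # deg C
grid = np.array([[101.0, 4.0], [102.0, 3.0]])
print(idw(stations, temps, grid))
```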

Keywords: air temperature control, interpolation analysis, peninsular Malaysia, regression model, air temperature

Procedia PDF Downloads 371
8823 Design Flood Estimation in Satluj Basin-Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh-India

Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria

Abstract:

Introduction: Design flood studies are essential for the effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located on the river Satluj in the State of Himachal Pradesh, India, was a major challenge, given that the river flows through the Himalayan region from Tibet to India and has a large catchment area of varying topography, climate, and vegetation. No discharge data were available for the part of the river in Tibet, whereas for India data were available only at Khab, Rampur, and Luhri. Estimation of the design flood using standard methods was therefore not possible. This challenge was met using two different approaches for the upper (snow-fed) and lower (rain-fed) catchment: a flood frequency approach and a hydro-meteorological approach. i) For the catchment up to the Khab gauging site (sub-catchment C1), the flood frequency approach was used. Around 90% of the catchment area up to Khab (46,300 sq km) is snow-fed and lies above 4,200 m. In view of the predominantly snow-fed area, the 1-in-10,000-year return period flood estimated at Khab using flood frequency analysis was taken as the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, increased by 10% to make them instantaneous. The design flood of 4,184 cumec thus obtained was considered the PMF at Khab. ii) For the catchment between Khab and the Sunni Dam (sub-catchment C2), the hydro-meteorological approach was used. This method is based upon the catchment response to the rainfall pattern (Probable Maximum Precipitation, PMP) observed over the catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and the derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit rainfall. The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies were carried out to derive the PMF for the catchment area between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively. The PMF so obtained was 12,920.60 cumec. Final result: As the catchment area up to the Sunni Dam was divided into two sub-catchments, the flood hydrograph for catchment C1 was routed through the connecting channel reach (river Satluj) using the Muskingum method, and the design flood was computed by adding the routed flood ordinates to the flood ordinates of catchment C2. A total design flood (2-day PMF) with a peak of 15,473 cumec was obtained. Conclusion: Even though several factors are relevant in deciding the method to be used for design flood estimation, data availability and the purpose of the study are the most important. Since, generally, we cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done using whatever data are available. Depending upon the type of data available for a particular catchment, the appropriate method is selected.
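
For context on the routing step, below is a minimal sketch of Muskingum channel routing; K, X, the time step and the inflow hydrograph (given a peak equal to the 4,184 cumec figure quoted above) are illustrative values only, not the study's calibrated parameters.

```python
# Hedged sketch of Muskingum routing of a flood hydrograph through a reach.
import numpy as np

def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0):
    """Route an inflow hydrograph (cumec); K and dt in hours, X dimensionless."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom          # note c0 + c1 + c2 = 1
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]
    for t in range(1, len(inflow)):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow

inflow = np.array([100, 800, 2500, 4184, 3600, 2200, 1200, 600, 200], float)
print(muskingum_route(inflow).round(1))
```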

Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph

Procedia PDF Downloads 322
8822 Real-Time Classification of Hemodynamic Response by Functional Near-Infrared Spectroscopy Using an Adaptive Estimation of General Linear Model Coefficients

Authors: Sahar Jahani, Meryem Ayse Yucel, David Boas, Seyed Kamaledin Setarehdan

Abstract:

Near-infrared spectroscopy allows monitoring of the oxy- and deoxy-hemoglobin concentration changes associated with the hemodynamic response function (HRF). The HRF is usually affected by natural physiological hemodynamics (systemic interference), which occurs in all body tissues, including brain tissue; this makes HRF extraction a very challenging task. In this study, we used a Kalman filter based on a general linear model (GLM) of brain activity to determine the proportion of systemic interference in the brain hemodynamics. The performance of the proposed algorithm is evaluated in terms of the peak-to-peak error (Ep), mean square error (MSE), and Pearson’s correlation coefficient (R²) between the estimated and the simulated hemodynamic responses. The technique also allows real-time estimation of single-trial functional activations, and it was applied to classify finger tapping versus resting state. The average real-time classification accuracy of 74% over 11 subjects demonstrates the feasibility of developing an effective functional near-infrared spectroscopy brain-computer interface (fNIRS-BCI).
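
The paper's adaptive estimator is not reproduced here; as a generic illustration of tracking GLM coefficients with a Kalman filter (random-walk state model), the sketch below uses hypothetical regressors and noise levels rather than actual fNIRS data.

```python
# Hedged sketch: adaptive estimation of GLM coefficients with a Kalman filter.
import numpy as np

def kalman_glm(y, X, q=1e-5, r=1e-2):
    """y: (T,) measured signal, X: (T, p) GLM regressors (e.g. HRF + drift)."""
    T, p = X.shape
    beta = np.zeros(p)                    # state: GLM coefficients
    P = np.eye(p)                         # state covariance
    Q, R = q * np.eye(p), r
    betas = np.zeros((T, p))
    for t in range(T):
        P = P + Q                         # predict (random-walk coefficients)
        H = X[t:t + 1, :]                 # 1 x p observation row
        S = H @ P @ H.T + R
        K = (P @ H.T) / S                 # Kalman gain, p x 1
        beta = beta + (K * (y[t] - H @ beta)).ravel()
        P = (np.eye(p) - K @ H) @ P       # update
        betas[t] = beta
    return betas

# Hypothetical usage: 200 samples, 2 regressors (task waveform + constant drift)
rng = np.random.default_rng(1)
X = np.column_stack([np.sin(np.linspace(0, 8, 200)), np.ones(200)])
y = X @ np.array([0.8, 0.1]) + 0.05 * rng.standard_normal(200)
print(kalman_glm(y, X)[-1])               # final coefficient estimates
```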

Keywords: hemodynamic response function, functional near-infrared spectroscopy, adaptive filter, Kalman filter

Procedia PDF Downloads 155
8821 Secularism and Political Inclusion: Turkey in the 2000s

Authors: Edgar Sar

Abstract:

For more than a decade, secularism’s compatibility with religion has been called into question. In particular, secular states’ exclusionary practices have been cited to argue that secularism is not necessary for democracy. Meanwhile, with the debut of Turkey’s Justice and Development Party (AKP) in 2002, the Turkish state’s approach to religion has gradually changed. It is argued in this presentation that this change has led Turkey into a process of de-secularization, which refers to a considerable regression in the state’s inclusionary and pluralist credentials. In this regard, this study both reflects on the relationship between secularism and democracy within the context of the Turkish experience and analyses the consequences of the process of de-secularization of the state in Turkey. To analyze the Turkish state’s changing approach to religion and measure the de-secularization of the state, the connection between state and religion is examined at three levels: ends, institutions, and law and policies. The presentation will indicate that the Turkish state’s connection with religion at all three levels significantly weakened its secular credentials, which at the same time put at risk the state’s commitment to neutrality, freedom of conscience and equality. In this regard, the change in the Turkish state’s approach to religion throughout the 2000s, which this study refers to as the process of de-secularization of the state, also brought about a process of de-democratization for Turkey.

Keywords: AKP, political inclusion, secularism, Turkey

Procedia PDF Downloads 341
8820 The Standardization of Colorado Schools to Offer Opportunity Through Equal Education

Authors: Heather Caldwell

Abstract:

In 1915, state superintendent Mary C. C. Bradford initiated a state standardization plan in order to improve the quality of schools and the educational experience for all children in Colorado. This plan would change the schools, improving them and offering more opportunities for children, teachers, and the community. In a state where geography limited the possibility of making all schools equal and challenged state school leaders to improve education throughout the state, the leadership prevailed and worked together with local schools and school leaders to make drastic improvements in the curriculum. This paper will discuss this plan and will highlight key contributions to this standardization plan that improved opportunities for all students in the state of Colorado through these educational initiatives.

Keywords: history of education, standardization, curriculum, state superintendent, women in education

Procedia PDF Downloads 49
8819 Analysis of Translational Ship Oscillations in a Realistic Environment

Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting

Abstract:

To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is utilized on board to measure ship oscillating motions. The observations used are the three-axis accelerations and three-axis rotational rates provided by the sensor. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to deal with the nonlinearities between the ship motion in the body frame and the observation information in the sensor's frame. As a side effect, the method suppresses sensor noise and other unwanted errors. The results include not only roll and pitch, but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated; these agree well with the measurements after processing with the suggested method.
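
The extended Kalman filter itself is not reproduced here; as a kinematic illustration of translating sensor measurements to the center of gravity, the sketch below applies the rigid-body lever-arm relation with made-up values and approximates angular acceleration by differencing the rate-gyro output.

```python
# Hedged kinematic sketch: acceleration at the CoG from acceleration measured at
# the sensor location on a rigid body. r is the (assumed constant) body-frame
# vector from sensor to CoG; all vectors are expressed in the ship body frame.
import numpy as np

def acceleration_at_cog(a_sensor, omega, omega_prev, r, dt):
    """omega in rad/s, dt in seconds, accelerations in m/s^2."""
    alpha = (omega - omega_prev) / dt                      # angular acceleration
    return a_sensor + np.cross(alpha, r) + np.cross(omega, np.cross(omega, r))

# Hypothetical single sample
a_s = np.array([0.10, -0.02, 9.79])
w, w_prev = np.array([0.010, 0.030, 0.0]), np.array([0.012, 0.028, 0.0])
r = np.array([5.0, 0.5, -1.2])                             # sensor -> CoG (m)
print(acceleration_at_cog(a_s, w, w_prev, r, dt=0.01))
```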

Keywords: extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation

Procedia PDF Downloads 519
8818 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling

Authors: Muhammad Nouman Qureshi, Muhammad Hanif

Abstract:

Adaptive cluster sampling (ACS) was specifically developed for the estimation of highly clumped populations and is applied to a wide range of situations, such as rare and endangered animal species, unevenly distributed minerals, HIV patients and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. Expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator are made with existing estimators. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.
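
The proposed semi-exponential estimator is not reproduced here; as context, the sketch below shows the classical modified Hansen-Hurwitz mean that serves as a baseline in adaptive cluster sampling (the keyword "Hansen-Hurwitz estimation" above). The networks and values are hypothetical.

```python
# Hedged sketch of the modified Hansen-Hurwitz estimator under ACS: each initial
# unit contributes the mean of the observations in the network it belongs to.
import numpy as np

def hansen_hurwitz_acs(initial_sample, networks, y):
    """initial_sample: indices of the initial SRS units.
    networks: dict mapping each initial unit to the set of units in its network.
    y: dict of observed values for all surveyed units."""
    network_means = [np.mean([y[j] for j in networks[i]]) for i in initial_sample]
    return np.mean(network_means)          # estimate of the population mean

# Hypothetical example: three initial units; unit 0 triggers a network {0, 4, 7}
networks = {0: {0, 4, 7}, 1: {1}, 2: {2}}
y = {0: 12, 4: 9, 7: 15, 1: 0, 2: 1}
print(hansen_hurwitz_acs([0, 1, 2], networks, y))
```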

Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation

Procedia PDF Downloads 234
8817 Indoor Real-Time Positioning and Mapping Based on Manhattan Hypothesis Optimization

Authors: Linhang Zhu, Hongyu Zhu, Jiahe Liu

Abstract:

This paper investigates a method of indoor real-time positioning and mapping based on the Manhattan world assumption. In indoor environments, relying solely on feature matching techniques or other geometric algorithms for sensor pose estimation inevitably results in cumulative errors, posing a significant challenge to indoor positioning. To address this issue, we adopt the Manhattan world hypothesis to optimize the feature-matching-based camera pose algorithm, which improves the accuracy of camera pose estimation. A special processing method is applied to image data frames that conform to the Manhattan world assumption. When similar data frames appear subsequently, they can be used to eliminate drift in sensor pose estimation, thereby reducing cumulative estimation errors and optimizing mapping and positioning. Experimental verification shows that our method achieves high-precision real-time positioning in indoor environments and successfully generates maps of indoor environments. This provides effective technical support for applications such as indoor navigation and robot control.

Keywords: Manhattan world hypothesis, real-time positioning and mapping, feature matching, loopback detection

Procedia PDF Downloads 57
8816 Studies of Substituent and Solvent Effect on Spectroscopic Properties Of 6-OH-4-CH3, 7-OH-4-CH3 and 7-OH-4-CF3 Coumarin

Authors: Sanjay Kumar

Abstract:

This paper reports the solvent effects on the electronic absorption and fluorescence emission spectra of 6-OH-4-CH3, 7-OH-4-CH3 and 7-OH-4-CF3 coumarin derivatives having -OH, -CH3 and -CF3 substituents at different positions, in various polar and non-polar solvents. The first excited singlet state and ground state dipole moments were calculated using the Bakhshiev, Kawski-Chamma-Viallet and Reichardt-Dimroth equations and were compared for all the coumarins studied. In all cases the dipole moments were found to be higher in the excited singlet state than in the ground state, indicating a substantial redistribution of π-electron density in the excited state. The angle between the excited singlet state and ground state dipole moments is also calculated. The red shift of the absorption and fluorescence emission bands observed for all the coumarins studied upon increasing solvent polarity indicates that the electronic transitions are of π → π* nature.

Keywords: coumarin, solvent effects, absorption spectra, emission spectra, excited singlet state dipole moment, ground state dipole moment, solvatochromism

Procedia PDF Downloads 828
8815 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis

Authors: Akinola Ikudayisi, Josiah Adeyemo

Abstract:

The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, several variables must be considered while estimating and modeling ETₒ. This study therefore carries out a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) in South Africa for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ is the output. The PCA technique was adopted to extract the most important information from the dataset and to analyze the relationship between the five variables and ETₒ, in order to determine the most significant variables affecting ETₒ estimation at VIS. From the model performance, two principal components explaining 82.7% of the variance were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature and wind speed are the most important variables in ETₒ estimation and modeling at VIS. In other words, ETₒ increases with temperature and wind speed. Other variables, such as rainfall and relative humidity, are less important and cannot provide enough information about ETₒ estimation at VIS. The outcome of this study has helped to reduce the input variable dimensionality from five to the three most significant variables in ETₒ modelling at VIS, South Africa.
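
As an illustration of the PCA step described above, the sketch below runs PCA on standardized monthly variables; the synthetic data merely stand in for the SAWS/ARC records, which are not reproduced here.

```python
# Hedged sketch of principal component extraction on five weather variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# columns: Tmin, Tmax, rainfall, relative humidity, wind speed (hypothetical)
rng = np.random.default_rng(42)
X = rng.normal(size=(240, 5)) * [3, 4, 20, 10, 1.5] + [12, 28, 35, 55, 3]

Xs = StandardScaler().fit_transform(X)          # PCA on standardized variables
pca = PCA(n_components=2).fit(Xs)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("loadings (components):", pca.components_)
```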

Keywords: irrigation, principal component analysis, reference evapotranspiration, Vaalharts

Procedia PDF Downloads 249
8814 The Role of the State Budget: An Evaluation of Public Expenditures and Taxes in Turkey

Authors: Erdal Eroğlu, Özhan Çetinkaya

Abstract:

The purpose of this paper is to show how the state plays a regulatory role in the relations of distribution by analyzing taxes and expenditures in Turkey. The paper has two main arguments. First, the state intervenes in economic and social life via budget policies and steers the relations of distribution within the scope of the reproduction of capital accumulation and legitimacy. Second, a great amount of public expenditure benefits capital owners, while the state gains its tax income mainly from low- and middle-income groups.

Keywords: distribution, public expenditure, state budget, taxes

Procedia PDF Downloads 525
8813 Role of Spatial Variability in the Service Life Prediction of Reinforced Concrete Bridges Affected by Corrosion

Authors: Omran M. Kenshel, Alan J. O'Connor

Abstract:

Estimating the service life of reinforced concrete (RC) bridge structures located in corrosive marine environments is of great importance to their owners/engineers. Traditionally, bridge owners/engineers relied largely on subjective engineering judgment, e.g. visual inspection, in their estimation approach. However, because financial resources are often limited, rational calculation methods of estimation are needed to aid in making reliable and more accurate predictions of the service life of RC structures, in order to direct funds to the bridges found to be the most critical. Criticality of the structure can be considered either from the Structural Capacity (i.e. Ultimate Limit State) or from the Serviceability viewpoint, whichever is adopted; this paper considers the service life of the structure only from the Structural Capacity viewpoint. Considering the great variability associated with the parameters involved in the estimation process, the probabilistic approach is most suited. The probabilistic modelling adopted here used the Monte Carlo simulation technique to estimate the Reliability (i.e. Probability of Failure) of the structure under consideration. The authors used their own experimental data for the Correlation Length (CL) of the most important deterioration parameters. The CL is a parameter of the Correlation Function (CF) by which the spatial fluctuation of a deterioration parameter is described. The CL data used here were produced by analyzing 45 chloride profiles obtained from a 30-year-old RC bridge located in a marine environment. The service life of the structure was predicted in terms of the load-carrying capacity of an RC bridge beam girder. The analysis showed that the influence of spatial variability is only evident if the reliability of the structure is governed by the Flexure failure rather than by the Shear failure.
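
As a minimal illustration of the Monte Carlo reliability idea (not the paper's spatially variable model), the sketch below estimates a probability of failure for a corroding girder with hypothetical lognormal resistance and load-effect distributions.

```python
# Hedged Monte Carlo sketch: probability of failure = P(resistance < load effect).
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

loss = rng.normal(0.15, 0.05, n).clip(0, 0.6)                   # rebar area lost
resistance = rng.lognormal(np.log(900), 0.10, n) * (1 - loss)   # kNm, degraded
load_effect = rng.lognormal(np.log(600), 0.15, n)               # kNm

pf = np.mean(resistance < load_effect)                          # failure probability
print(f"Pf ~ {pf:.4f}")
```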

Keywords: Chloride-induced corrosion, Monte-Carlo simulation, reinforced concrete, spatial variability

Procedia PDF Downloads 470
8812 Count Data Regression Modeling: An Application to Spontaneous Abortion in India

Authors: Prashant Verma, Prafulla K. Swain, K. K. Singh, Mukti Khetan

Abstract:

Objective: In India, around 20,000 women die every year due to abortion-related complications. In the modelling of count variables there is sometimes a preponderance of zero counts. This article concerns the estimation of various count regression models to predict the average number of spontaneous abortions among women in the Punjab state of India, and assesses the factors associated with the number of spontaneous abortions. Materials and methods: The study included 27,173 married women of Punjab obtained from the DLHS-4 survey (2012-13). Poisson regression (PR), negative binomial (NB) regression, zero-hurdle negative binomial (ZHNB), and zero-inflated negative binomial (ZINB) models were employed to predict the average number of spontaneous abortions and to identify the determinants affecting it. Results: Statistical comparisons among the four estimation methods revealed that the ZINB model provides the best prediction for the number of spontaneous abortions. Antenatal care (ANC) place, place of residence, total children born to a woman, the woman's education and economic status were found to be the most significant factors affecting the occurrence of spontaneous abortion. Conclusions: The study offers a practical demonstration of techniques designed to handle count variables. The ZINB model provided the best prediction for the number of spontaneous abortions and is recommended for this purpose. The study suggests that women receive institutional antenatal care to attain limited parity. It also advocates promoting higher education among women in Punjab, India.
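
For readers who want to try the recommended model class, below is a hedged sketch of fitting a zero-inflated negative binomial model with statsmodels; the covariates and synthetic counts are hypothetical, not the DLHS-4 records.

```python
# Hedged sketch: zero-inflated negative binomial regression on synthetic data.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 2_000
educ = rng.integers(0, 15, n)                 # years of schooling (hypothetical)
anc = rng.integers(0, 2, n)                   # institutional ANC (0/1)
X = sm.add_constant(np.column_stack([educ, anc]))

# Synthetic counts with many zeros, loosely mimicking spontaneous abortions
y = rng.poisson(0.2 * np.exp(-0.05 * educ)) * (rng.random(n) > 0.6)

model = ZeroInflatedNegativeBinomialP(y, X, exog_infl=X, p=2)
result = model.fit(maxiter=500, disp=False)
print(result.summary())
```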

Keywords: count data, spontaneous abortion, Poisson model, negative binomial model, zero hurdle negative binomial, zero-inflated negative binomial, regression

Procedia PDF Downloads 152
8811 An Energy Detection-Based Algorithm for Cooperative Spectrum Sensing in Rayleigh Fading Channel

Authors: H. Bakhshi, E. Khayyamian

Abstract:

Cognitive radio has been recognized as one of the most promising technologies for dealing with the scarcity of the radio spectrum. In cognitive radio systems, secondary users are allowed to utilize the frequency bands of primary users when the bands are idle; hence, how to accurately detect the idle frequency bands has attracted many researchers’ interest. Detection performance is sensitive to noise power and gain fluctuations. Since the signal-to-noise ratio (SNR) between the primary user and the secondary users is not the same and changes over time, SNR and noise power estimation are essential. In this paper, we present a cooperative spectrum sensing algorithm that uses SNR estimation to improve detection performance in realistic conditions.
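
For context, here is a minimal sketch of the single-user energy detection statistic that underlies such algorithms, under an AWGN assumption; the cooperative fusion rule and the SNR/noise estimators proposed in the paper are not reproduced, and all parameters are hypothetical.

```python
# Hedged sketch of energy detection for one sensing window of complex samples.
import numpy as np
from scipy.stats import chi2

def energy_detect(samples, noise_power, pfa=0.01):
    """Return (decision, test statistic, threshold) for a target false-alarm rate."""
    N = samples.size
    T = np.sum(np.abs(samples) ** 2) / noise_power     # normalized energy statistic
    threshold = chi2.ppf(1 - pfa, df=2 * N) / 2        # 2T ~ chi2(2N) under H0
    return T > threshold, T, threshold

rng = np.random.default_rng(3)
N, snr_linear, noise_power = 512, 0.5, 1.0
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) * np.sqrt(noise_power / 2)
signal = np.sqrt(snr_linear * noise_power / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(energy_detect(noise + signal, noise_power)[0])   # primary user present?
```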

Keywords: cognitive radio, cooperative spectrum sensing, energy detection, SNR estimation, spectrum sensing, Rayleigh fading channel

Procedia PDF Downloads 444
8810 The Role of Police in Counterinsurgency: A Case Study of Tripura

Authors: Yagnik Patel

Abstract:

This paper has two main objectives. First, it examines the emergence of the insurgency in the state of Tripura. The state of Tripura faced a full-blown insurgency from 1978, after the formation of the Tripura National Volunteers (TNV), although the roots of this insurgency can be traced even before 1978. This study will analyze the roots and trajectory of the insurgency in Tripura. Second, it examines the role played by the police in counterinsurgency in the state of Tripura. Even though state police are mandated to maintain law and order and public order (like any police force), the state police of Tripura have played a significant role in curbing the insurgency by enhancing their counterinsurgency (COIN) capabilities and by restructuring a new, comprehensive COIN doctrine. By the end of May 2015, the state government had lifted the Armed Forces (Special Powers) Act (AFSPA) from the state of Tripura as the violence declined. The fight against insurgency is usually carried out by the military or paramilitary forces, but the police organization is now also becoming a vital state apparatus. After the Punjab police and the Andhra Pradesh police, the Tripura police have also successfully curbed an insurgency in their state. This was the third time a successful counterinsurgency was conducted by state police in India, which demonstrates the importance of the police in the fight against insurgency. In this regard, this paper will use both quantitative and qualitative research methods in an explanatory case study to analyze and explain the roots, causes and trajectory of the insurgency in the state of Tripura and the role played by the police in COIN in Tripura. The paper will also examine the successful 'Police Model of Tripura'.

Keywords: counterinsurgency, insurgency, police, Tripura state rifles

Procedia PDF Downloads 183
8809 Methods of Variance Estimation in Two-Phase Sampling

Authors: Raghunath Arnab

Abstract:

Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design and only information on the auxiliary variable is collected. During the second phase, a sample is selected either from the sample selected in the first phase or from the entire population by using a suitable sampling design, and information regarding both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easier and cheaper to collect than the study variable, and if the relationship between the study and auxiliary variables is strong. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article, we consider how one can use data collected in the first phase at the stages of parameter estimation, stratification, sample selection and their combinations in the second phase, in a unified setup applicable to any sampling design and to wider classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for assessing the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes and testing hypotheses, among other purposes. Although the variance is a non-negative quantity, its estimators may not be non-negative; if an estimator of variance is negative, it cannot be used for the estimation of confidence intervals, hypothesis testing or as a measure of sampling error. The non-negativity properties of the variance estimators are also studied in detail.
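
The article's unified framework is not reproduced here; as context, the sketch below shows the familiar ratio estimator in two-phase sampling together with a simplified jackknife variance that conditions on the first-phase auxiliary mean (so it understates the full two-phase variance). All data are hypothetical.

```python
# Hedged sketch: two-phase (double sampling) ratio estimator with a simplified
# leave-one-out jackknife variance over the second-phase units.
import numpy as np

def two_phase_ratio(xbar_phase1, x2, y2):
    """xbar_phase1: auxiliary mean from the large first-phase sample.
    x2, y2: auxiliary and study values on the second-phase subsample."""
    return np.mean(y2) / np.mean(x2) * xbar_phase1

def jackknife_variance(xbar_phase1, x2, y2):
    n = len(y2)
    est = two_phase_ratio(xbar_phase1, x2, y2)
    loo = np.array([two_phase_ratio(xbar_phase1, np.delete(x2, i), np.delete(y2, i))
                    for i in range(n)])
    return (n - 1) / n * np.sum((loo - est) ** 2)

rng = np.random.default_rng(5)
x1 = rng.gamma(4.0, 2.0, 1000)                   # first phase: auxiliary only
idx = rng.choice(1000, 120, replace=False)       # second-phase subsample
x2, y2 = x1[idx], 3.0 * x1[idx] + rng.normal(0, 2.0, 120)
print(two_phase_ratio(x1.mean(), x2, y2), jackknife_variance(x1.mean(), x2, y2))
```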

Keywords: auxiliary information, two-phase sampling, varying probability sampling, unbiased estimators

Procedia PDF Downloads 583
8808 Lead-Time Estimation Approach Using the Process Capability Index

Authors: Abdel-Aziz M. Mohamed

Abstract:

This research proposes a methodology to estimate the customer order lead time in the supply chain based on the process capability index. Both the case in which the process output is normally distributed and the case in which it is not are considered. The relationships between the system capability indices in both service and manufacturing applications, delivery system reliability and the percentage of orders delivered after their promised due dates are presented. The proposed method can be used to examine the current process capability to deliver the orders before the promised lead time. If the system is found to be incapable, the method can be used to help revise the current lead time to a proper value according to the service reliability level selected by management. Numerical examples and a case study describing the lead-time estimation methodology and testing the system's capability of delivering orders before their promised due dates are presented.
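
A hedged numeric sketch of the normal-case relationship the abstract describes: the one-sided capability index links a quoted lead time to on-time delivery reliability. The figures are made up for illustration.

```python
# Hedged sketch: one-sided capability index, on-time reliability, and a revised
# lead-time quote for a management-selected reliability target (normal case).
from scipy.stats import norm

mu, sigma = 12.0, 2.0          # mean and std. dev. of order fulfilment time (days)
quoted = 15.0                  # promised lead time (days)

cpu = (quoted - mu) / (3 * sigma)                 # one-sided capability index
reliability = norm.cdf(3 * cpu)                   # P(order delivered on time)
print(f"Cpu = {cpu:.2f}, on-time reliability = {reliability:.3f}, "
      f"late = {100 * (1 - reliability):.1f}%")

# Revising the quote for a target reliability chosen by management, e.g. 99%
target = 0.99
revised_quote = mu + sigma * norm.ppf(target)
print(f"lead time for {target:.0%} reliability: {revised_quote:.1f} days")
```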

Keywords: lead-time estimation, process capability index, delivery system reliability, statistical analysis, service achievement index, service quality

Procedia PDF Downloads 554
8807 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of greatest interest are commonly time-to-event, so-called survival data. The importance of robust models in this context lies in comparing the effects of randomly controlled experimental groups in a causal sense. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing the simple association between response and predictors. Hence, a causal-effect-based semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Owing to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received considerable attention for the estimation of causal effects in modeling left-truncated and right-censored survival data. Despite its wide application and popularity in estimating unknown parameters, the maximum likelihood estimation technique is quite complex and burdensome for estimating the unknown parameters and the unspecified transformation function in the presence of possibly time-varying covariates; thus, to ease this complexity, we propose modified estimating equations. After outlining the estimation procedures, the consistency and asymptotic properties of the estimators are derived, and the finite-sample behavior of the proposed model is illustrated via simulation studies and the Stanford heart transplant real-data example. To sum up, the bias of covariates is adjusted by estimating the density function of the truncation variable, which is also incorporated in the model as a covariate in order to relax the independence assumption between failure time and truncation time. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of the unknown parameters and the unspecified transformation function. In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments after adjusting for the bias arising in the model due to the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 120
8806 Debate, Discontent and National Identity in a Secular State

Authors: Man Bahadur Shahu

Abstract:

Secularism has been a controversial, debated and misinterpreted issue since its endorsement in the 2007 constitution of Nepal. Unprecedented acts both favoring and opposing secularism have been seen in the public domain, creating fallacies and suspicions in the rationalization and modernization process. This paper highlights three important points. First, secularization suddenly ruptures the silence and institutional decline of religion within the state. Second, the state's effort toward secularism simultaneously fosters state neutrality and the separation of the state from religious institutions, which amplifies the recognition of all religious groups through equal treatment of their festivities, rituals, and practices. Third, no state is ever completely secular, because of deep-rooted mindsets and dispositions toward particular religious faiths and beliefs, which largely fuel intergroup conflict, disputes, riots and turbulence in the post-secular period in the name of proselytizing and conversion.

Keywords: conflict, proselytizing, religion, secular

Procedia PDF Downloads 151
8805 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling

Authors: Farzaneh Khorram

Abstract:

This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and pinpoint blocks with the highest potential for energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation, or simulation, based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved the assessment of contacts between these units and the estimation of CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods, such as Plurigaussian and Turning band, demonstrated more realistic outcomes compared to cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.

Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging

Procedia PDF Downloads 65
8804 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index

Authors: Shih-Pin Chen, Shih-Syuan You

Abstract:

This study proposes a regression model with flexible spreads for fuzzy input-output data, to cope with situations in which the existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined to precisely measure the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted to determine more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with other models, three different criteria are adopted. Results from commonly used numerical test examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data, but also shows satisfactory and stable performance in terms of the total estimation error under these three criteria.

Keywords: dissemblance index, forecasting, fuzzy sets, linear regression

Procedia PDF Downloads 354
8803 Marginal Productivity of Small Scale Yam and Cassava Farmers in Kogi State, Nigeria: Data Envelopment Analysis as a Complement

Authors: M. A. Ojo, O. A. Ojo, A. I. Odine, A. Ogaji

Abstract:

The study examined the marginal productivity of small-scale yam and cassava farmers in Kogi State, Nigeria. Data for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 150 randomly selected yam and cassava farmers from three Local Government Areas of the state. Descriptive statistics, data envelopment analysis (DEA) and a Cobb-Douglas production function were used to analyze the data. The DEA result on overall technical efficiency showed that 40% of the sampled yam and cassava farmers in the study area were operating at the frontier and optimum level of production, with a mean technical efficiency of 1.00. This implies that 60% of the yam and cassava farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Cobb-Douglas analysis of factors affecting output showed that labour, planting materials, fertilizer and capital inputs positively and significantly affected the output of the yam and cassava farmers in the study area. The study further revealed that yam and cassava farms in the study area operated under increasing returns to scale. The marginal productivity analysis further showed that relatively efficient farms were more marginally productive in resource utilization. The study also shows that estimating production functions without separating the farms into efficient and inefficient ones biases the parameter values obtained from such production functions. It is therefore recommended that yam and cassava farmers in the study area form cooperative societies so as to gain access to productive inputs that will enable them to expand. Also, since using a single-equation production function model produces biased parameter estimates, as confirmed above, farms should be decomposed into efficient and inefficient ones before production function estimation is carried out.
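
As an illustration of the Cobb-Douglas estimation step (log-log OLS), the sketch below fits input elasticities on synthetic data; the DEA efficiency screening described above is not reproduced, and all figures are hypothetical.

```python
# Hedged sketch: Cobb-Douglas production function estimated by OLS in logs.
import numpy as np

rng = np.random.default_rng(11)
n = 150
labour = rng.lognormal(3.0, 0.4, n)            # person-days (hypothetical)
planting = rng.lognormal(2.0, 0.5, n)          # kg of setts/cuttings
fertiliser = rng.lognormal(1.5, 0.6, n)        # kg
capital = rng.lognormal(4.0, 0.5, n)           # monetary units

output = (5.0 * labour**0.4 * planting**0.2 * fertiliser**0.1 * capital**0.3
          * np.exp(rng.normal(0, 0.2, n)))

# ln(Q) = ln(A) + b1 ln(L) + b2 ln(P) + b3 ln(F) + b4 ln(K) + e
X = np.column_stack([np.ones(n), np.log(labour), np.log(planting),
                     np.log(fertiliser), np.log(capital)])
beta, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
print("elasticities:", beta[1:], "returns to scale:", beta[1:].sum())
```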

Keywords: marginal productivity, DEA, production function, Kogi state

Procedia PDF Downloads 478