Search results for: maximum likelihood amplitude estimation (MLQAE)
2502 Investigation of Vortex Induced Vibration and Galloping Characteristic for Various Shape Slender Bridge Hanger
Authors: Matza Gusto Andika, Syariefatunnisa
Abstract:
The hanger of an arch bridge is an important component that transfers load from the bridge deck to the arch. Bridges are subjected to several types of loading, such as dead load, temperature load, wind load, and moving loads. The bridge hanger usually has a typical bluff-body cross-section such as a circle, square, or H beam. When flow passes a bluff body, it separates from the body surface, generating an unsteady, broad wake. Vortices are shed into the wake periodically at a frequency related to the undisturbed wind speed and the size of the cross-section through the well-known Strouhal relationship. The dynamic characteristics and the hanger shape are crucial for the evaluation of vortex-induced vibrations and structural vibrations. The effect of vortex-induced vibration is not as catastrophic as the flutter phenomenon, but it can cause fatigue failure of the structure. Wind tunnel tests were conducted to investigate the VIV and galloping effects on circular, hexagonal, and H-beam bluff bodies for bridge hangers. This research found that the hanger with a hexagonal cross-section has the minimum vibration amplitude due to the VIV phenomenon compared with the circular and H-beam sections. However, when the wind strikes the acute angle of the hexagonal shape, the vibration amplitude of the hexagonal hanger is higher than that of the other bluff bodies.
Keywords: vortex induced vibration, hanger bridge, wind tunnel, galloping
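The Strouhal relationship mentioned above ties the shedding frequency to wind speed and section width, f = St·U/D. The sketch below illustrates that relation for a few hanger cross-sections; the Strouhal numbers, section width, and wind speed are assumed values for illustration only, not data from the study.

```python
# Illustrative sketch of the Strouhal relationship f = St * U / D.
# The Strouhal numbers and dimensions below are assumed, not from the paper.

def shedding_frequency(strouhal: float, wind_speed: float, width: float) -> float:
    """Vortex-shedding frequency [Hz] from the Strouhal relationship."""
    return strouhal * wind_speed / width

sections = {
    "circle":  {"St": 0.20, "D": 0.15},   # assumed Strouhal number and width [m]
    "hexagon": {"St": 0.18, "D": 0.15},
    "H beam":  {"St": 0.12, "D": 0.15},
}

wind_speed = 10.0  # undisturbed wind speed [m/s], assumed
for name, p in sections.items():
    f = shedding_frequency(p["St"], wind_speed, p["D"])
    print(f"{name:8s}: shedding frequency ~ {f:.1f} Hz")
```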
Procedia PDF Downloads 262
2501 Friction Estimation and Compensation for Steering Angle Control for Highly Automated Driving
Authors: Marcus Walter, Norbert Nitzsche, Dirk Odenthal, Steffen Müller
Abstract:
This contribution presents a friction estimator for industrial purposes which identifies Coulomb friction in a steering system. The estimator needs only a few, usually known, steering system parameters. Friction occurs in almost every mechanical system and has a negative influence on high-precision position control. This is demonstrated on a steering angle controller for highly automated driving. In this steering system, the friction induces limit cycles which cause oscillating vehicle movement when the vehicle follows a given reference trajectory. When the friction is compensated with the introduced estimator, the limit cycles can be suppressed. This is demonstrated by measurements in a series vehicle.
Keywords: friction estimation, friction compensation, steering system, lateral vehicle guidance
Procedia PDF Downloads 512
2500 Aerodynamic Modeling Using Flight Data at High Angle of Attack
Authors: Rakesh Kumar, A. K. Ghosh
Abstract:
The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff’s quasi-steady stall model. The NGN method is an algorithm that utilizes a Feed Forward Neural Network (FFNN) and Gauss-Newton optimization to estimate the parameters, and it does not require any a priori postulation of a mathematical model or solving of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before application to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparing them with wind tunnel values and with maximum likelihood estimates. Validation was also carried out by comparing the response of the measured motion variables with the response generated from the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling
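The NGN method couples a feed-forward neural network with Gauss-Newton optimization; the paper gives the details, but as a minimal, hedged sketch of the underlying Gauss-Newton step, the example below fits the parameters of a simple nonlinear model to noisy data by iterating on the residual Jacobian. The model, data, and parameter values are invented for illustration and are not the aerodynamic model of the paper.

```python
import numpy as np

# Minimal Gauss-Newton sketch: fit y = a * (1 - exp(-b * t)) to noisy data.
# The model and data are illustrative only, not the aerodynamic model of the paper.

def residuals(theta, t, y):
    a, b = theta
    return y - a * (1.0 - np.exp(-b * t))

def jacobian(theta, t):
    a, b = theta
    da = -(1.0 - np.exp(-b * t))          # d(residual)/da
    db = -a * t * np.exp(-b * t)          # d(residual)/db
    return np.column_stack([da, db])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * (1.0 - np.exp(-1.3 * t)) + 0.05 * rng.standard_normal(t.size)

theta = np.array([1.0, 1.0])              # initial guess
for _ in range(20):                        # Gauss-Newton iterations
    r = residuals(theta, t, y)
    J = jacobian(theta, t)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    theta = theta - step                   # residual = y - model, so subtract the step
    if np.linalg.norm(step) < 1e-10:
        break

print("estimated parameters:", theta)      # should approach [2.0, 1.3]
```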
Procedia PDF Downloads 443
2499 Design of Rigid L-Shaped Retaining Walls
Authors: Ahmed Rouili
Abstract:
Cantilever L-shaped walls are known to be relatively economical as a retaining solution. The design starts by proportioning the wall dimensions, for which stability is then checked. A ratio of the base length to the stem length falling between 0.5 and 0.7 ensures the stability requirements in most cases. However, the displacement pattern of the wall in terms of rotations and translations, and the lateral pressure profile, do not have the same form for all wall proportions, as is usually assumed. In the present work, the results of a numerical analysis are presented for different wall geometries. The results show that the proportioning governs the equilibrium between the instantaneous rotation and the translation of the wall toe. Moreover, the lateral pressure estimation based on the average of the at-rest and active pressures, recommended by most design standards, is found not to be applicable to all walls.
Keywords: cantilever wall, proportioning, numerical analysis, lateral pressure estimation
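For reference, the design-standard estimate questioned above averages the at-rest and active earth pressure coefficients. The sketch below computes that averaged coefficient and the resulting linear pressure distribution using the common Jaky and Rankine expressions; the soil friction angle, unit weight, and wall height are assumed values, not those of the study.

```python
import numpy as np

# Hedged sketch of the design-standard lateral pressure estimate: the average of
# the at-rest (Jaky) and active (Rankine) coefficients applied to a linear
# pressure distribution behind the stem. Soil properties and height are assumed.

phi = np.deg2rad(30.0)        # soil friction angle
gamma = 18.0                  # unit weight of backfill [kN/m^3]
height = 4.0                  # stem height [m]

k0 = 1.0 - np.sin(phi)                         # at-rest coefficient (Jaky)
ka = np.tan(np.pi / 4 - phi / 2.0) ** 2        # active coefficient (Rankine)
k_avg = 0.5 * (k0 + ka)                        # averaged value used by many standards

pressure_at_base = k_avg * gamma * height      # kPa at the base of the stem
resultant = 0.5 * k_avg * gamma * height ** 2  # kN per metre run of wall

print(f"K0 = {k0:.3f}, Ka = {ka:.3f}, K_avg = {k_avg:.3f}")
print(f"pressure at base ~ {pressure_at_base:.1f} kPa, resultant ~ {resultant:.1f} kN/m")
```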
Procedia PDF Downloads 323
2498 Presenting a Model Based on Artificial Neural Networks to Predict the Execution Time of Design Projects
Authors: Hamed Zolfaghari, Mojtaba Kord
Abstract:
After the feasibility study, the design phase starts, and the remaining phases are highly dependent on it. Forecasting the duration of the design phase can therefore save a great deal of time. This study provides a fast and accurate machine learning (ML) and optimization framework which allows a quick duration estimation of the project design phase, hence improving the operational efficiency and competitiveness of a design and construction company. Three data sets spanning three years, composed of the daily time spent on different design projects, are used to train and validate the ML models across multiple projects. Our study concluded that the Artificial Neural Network (ANN) achieved an accuracy of 0.94.
Keywords: time estimation, machine learning, artificial neural network, project design phase
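As a hedged illustration of the kind of model the abstract describes, the sketch below trains a small neural-network regressor to predict a design-phase duration from a few project features. The features, data, and network size are invented stand-ins; they are not the study's data sets.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in data: a few project features (e.g., scale, number of
# disciplines, revision count) and the design-phase duration in days.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(300, 3))
y = 30 + 80 * X[:, 0] + 40 * X[:, 1] ** 2 + 15 * X[:, 2] + rng.normal(0, 5, 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out projects:", round(r2_score(y_test, model.predict(X_test)), 3))
```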
Procedia PDF Downloads 96
2497 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data
Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu
Abstract:
Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed which considers the anomalous fluctuation scaling known as Taylor's law. This method is extended to handle incomplete sales data caused by stock-outs by introducing maximum likelihood estimation for censored data. A way of determining the optimal stock while pricing the cost of waste reduction is also proposed. This study focuses on the examination of the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around a 1% profit loss realizes half the disposal when the proportionality constant equals 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially with large sales numbers.
Keywords: food waste reduction, particle filter, point-of-sales, sustainable development goals, Taylor's law, time series analysis
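The censored-data step mentioned above can be illustrated with a minimal maximum likelihood sketch: on sold-out days the observed sales only tell us that demand was at least the stock level, so those days contribute a survival-function term to the likelihood. The Poisson demand model, stock level, and data below are assumptions for illustration, not the POS data of the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Hedged sketch: MLE of mean daily demand when sales are right-censored by
# stock-outs (a sold-out day only tells us demand >= stock). Illustrative data.

rng = np.random.default_rng(1)
true_mean, stock = 20.0, 22
demand = rng.poisson(true_mean, size=200)
sales = np.minimum(demand, stock)        # observed sales
sold_out = demand >= stock               # stock-out flag (observable in practice)

def neg_log_likelihood(lam):
    ll_uncensored = poisson.logpmf(sales[~sold_out], lam).sum()
    ll_censored = sold_out.sum() * np.log(poisson.sf(stock - 1, lam))  # P(demand >= stock)
    return -(ll_uncensored + ll_censored)

fit = minimize_scalar(neg_log_likelihood, bounds=(1.0, 60.0), method="bounded")
print("naive mean of sales:", round(sales.mean(), 2))   # biased low by stock-outs
print("censored-data MLE  :", round(fit.x, 2))          # closer to the true mean of 20
```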
Procedia PDF Downloads 130
2496 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluating goodness-of-fit and comparing several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they utilise the existing MCMC results and avoid expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their truncated or smoothed counterparts in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of the pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in the lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
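A minimal sketch of the quantities discussed above: given a matrix of pointwise log-likelihoods over posterior draws, WAIC combines the log pointwise predictive density (lppd) with a variance-based complexity penalty, and IS-LOO uses the reciprocals of the pointwise densities as raw importance weights, which reduces to a harmonic-mean estimate of each leave-one-out density. The toy normal-mean model and simulated draws are assumptions for illustration only.

```python
import numpy as np
from scipy.special import logsumexp

# Hedged sketch: WAIC and IS-LOO from a matrix of pointwise log-likelihoods,
# log_lik[s, i] = log p(y_i | theta_s), for S posterior draws and n observations.
# The normal-mean model and "posterior" draws are toy stand-ins.

rng = np.random.default_rng(0)
S, n = 4000, 50
y = rng.normal(1.0, 1.0, size=n)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=S)   # toy posterior draws
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2

# WAIC: log pointwise predictive density minus a variance-based complexity penalty.
lppd = logsumexp(log_lik, axis=0) - np.log(S)
p_waic = log_lik.var(axis=0, ddof=1)
waic = -2 * (lppd.sum() - p_waic.sum())

# IS-LOO: raw importance weights are the reciprocals of the pointwise densities,
# giving the harmonic-mean estimate p(y_i | y_-i) ~ S / sum_s 1 / p(y_i | theta_s).
# TIS-LOO and PSIS-LOO replace the largest raw weights by truncated/smoothed ones.
elpd_loo = (np.log(S) - logsumexp(-log_lik, axis=0)).sum()

print("lppd     :", round(lppd.sum(), 1))
print("WAIC     :", round(waic, 1))
print("elpd_loo :", round(elpd_loo, 1))
```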
Procedia PDF Downloads 391
2495 Kalman Filter Design in Structural Identification with Unknown Excitation
Authors: Z. Masoumi, B. Moaveni
Abstract:
This article concerns the first step of structural health monitoring: identifying the structural system in the presence of unknown input. In structural system identification, the identification of structural parameters such as stiffness and damping is considered. In this study, the Kalman filter (KF) design for structural systems with unknown excitation is presented. External excitations, such as earthquakes, wind, or any other forces, are not measured or are unavailable. The strength of this filter is its ability to estimate the state variables of the system in the presence of unknown input. The least squares estimation (LSE) method with unknown input is also studied, and parameter estimates are obtained. Finally, the advantages and drawbacks of both methods are studied using two examples.
Keywords: Kalman filter (KF), least square estimation (LSE), structural health monitoring (SHM), structural system identification
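For orientation, the sketch below is a minimal textbook Kalman filter (predict/update on a linear constant-velocity model with measured position). It is not the unknown-input variant developed in the paper; the system matrices and noise covariances are assumed for illustration.

```python
import numpy as np

# Minimal textbook Kalman filter for x_{k+1} = A x_k + w, y_k = H x_k + v.
# Matrices and noise levels are assumed; this is not the paper's unknown-input filter.

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
H = np.array([[1.0, 0.0]])                  # only position is measured
Q = 1e-4 * np.eye(2)                        # process noise covariance
R = np.array([[1e-2]])                      # measurement noise covariance

rng = np.random.default_rng(0)
x_true = np.array([0.0, 1.0])
x_hat, P = np.zeros(2), np.eye(2)

for k in range(500):
    x_true = A @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    y = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), size=1)

    # Predict
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (y - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("true state     :", np.round(x_true, 3))
print("estimated state:", np.round(x_hat, 3))
```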
Procedia PDF Downloads 315
2494 Study of Rayleigh-Bénard-Brinkman Convection Using LTNE Model and Coupled, Real Ginzburg-Landau Equations
Authors: P. G. Siddheshwar, R. K. Vanishree, C. Kanchana
Abstract:
A local nonlinear stability analysis using an eight-mode expansion is performed in arriving at the coupled amplitude equations for Rayleigh-Bénard-Brinkman convection (RBBC) in the presence of LTNE effects. Streamlines and isotherms are obtained in the two-dimensional unsteady finite-amplitude convection regime. The parameters' influence on heat transport is found to be more pronounced at small times than at long times. Results for Rayleigh-Bénard convection are obtained as a particular case of the present study. Additional modes are shown not to significantly influence the heat transport, leading us to infer that five minimal modes are sufficient for a study of RBBC. The present problem, which uses rolls as the pattern of manifestation of instability, is a needed first step towards a very general non-local study of two-dimensional unsteady convection. The results may be useful in determining the preferred range of parameter values when making rheometric measurements in fluids to ascertain fluid properties such as viscosity. The results of LTE are obtained as a limiting case of the LTNE results obtained in the paper.
Keywords: coupled Ginzburg–Landau model, local thermal non-equilibrium (LTNE), local thermal equilibrium (LTE), Rayleigh–Bénard-Brinkman convection
Procedia PDF Downloads 236
2493 Thermal and Geometric Effects on Nonlinear Response of Incompressible Hyperelastic Cylindrical Shells
Authors: Morteza Shayan Arani, Mohammadamin Esmailzadehazimi, Mohammadreza Moeini, Mohammad Toorani, Aouni A. Lakis
Abstract:
This paper investigates the nonlinear response of thin, incompressible, hyperelastic cylindrical shells in the presence of a time-varying temperature field while considering initial geometric imperfections. The governing equations of motion are derived using an improved Donnell's shallow shell theory. The hyperelastic material is modeled using the two-parameter Mooney-Rivlin model, incorporating temperature-dependent terms. The Lagrangian method is applied to obtain the equation of motion. The resulting governing equation is addressed through the Lindstedt-Poincaré and multiple scales methods. The linear and nonlinear models presented in this study are verified against the existing open literature, demonstrating the accuracy and reliability of the presented model. The study focuses on understanding the influence of temperature variations and geometric imperfections on the natural frequency and amplitude-frequency response of the systems. Notably, the investigation reveals the coexistence of hardening and softening peaks in the amplitude-frequency response, which vary in magnitude depending on these parameters. Additionally, resonance peaks exhibit changes as a result of temperature and geometric imperfections.
Keywords: hyperelastic material, cylindrical shell, geometrical nonlinearity, material nonlinearity, initial geometric imperfection, temperature gradient, hardening and softening
Procedia PDF Downloads 71
2492 Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks
Authors: Tadashi Watanabe, Jinya Katsuyama, Akihiro Mano
Abstract:
The estimation of leak flow rates through narrow cracks in structures is important for nuclear reactor safety, since the leak flow could be detected before the occurrence of a loss-of-coolant accident. The two-phase critical leak flow rates are calculated using a system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, and the leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature, but are affected largely by the roughness of the crack surface.
Keywords: crack, critical flow, leak, roughness
Procedia PDF Downloads 178
2491 Education Levels & University Student’s Income: Primary Data Analysis from the Universities of Punjab, Pakistan
Authors: Muhammad Ashraf
Abstract:
It is an empirically established fact that education promotes not only social and intellectual abilities but also the incomes of individuals. The present study investigates the connection between education level and student income. Data on different education levels were acquired from 300 students through a field survey of four public sector universities: two from upper Punjab (University of Gujarat and Government College University, Lahore) and two from lower Punjab (Islamia University, Bahawalpur and the University of Sahiwal). A two-phase estimation is based on the Mincerian human capital model. The first stage presents a statistical/descriptive investigation, which shows a positive linkage between higher education and student income. Econometric estimation is carried out in the second stage by applying the Ordinary Least Squares (OLS) method. The econometric examination reaffirms the importance of higher education, as the impact of education on students' incomes accelerates as we move from lower to higher levels of education. Education level, experience, and working hours are positively and significantly related to students' income. The econometric estimation also shows that M.Phil. and Ph.D. students have higher incomes than bachelor students. Based on the students' income profiles, the study recommends that the government provide part-time jobs or internships to students in line with labor market demand.
Keywords: education, student’s income, experience, universities
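The Mincerian setup mentioned above regresses log income on education and experience. The sketch below estimates such an equation by OLS on synthetic data; the variables, coefficients, and sample are invented for illustration and do not reproduce the survey.

```python
import numpy as np

# Hedged sketch of a Mincer-type OLS regression:
#   ln(income) = b0 + b1*education + b2*experience + b3*working_hours + e
# All data below are synthetic; they are not the survey data of the study.

rng = np.random.default_rng(7)
n = 300
education = rng.integers(12, 21, n)          # years of schooling
experience = rng.integers(0, 6, n)           # years of work experience
hours = rng.integers(0, 30, n)               # weekly working hours
log_income = (7.0 + 0.08 * education + 0.05 * experience
              + 0.01 * hours + rng.normal(0, 0.15, n))

X = np.column_stack([np.ones(n), education, experience, hours])
beta, *_ = np.linalg.lstsq(X, log_income, rcond=None)

for name, b in zip(["intercept", "education", "experience", "hours"], beta):
    print(f"{name:10s}: {b:+.3f}")
# The education coefficient is read as the approximate percentage return
# to one additional year of schooling.
```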
Procedia PDF Downloads 115
2490 A Diagnostic Comparative Analysis of on Simultaneous Localization and Mapping (SLAM) Models for Indoor and Outdoor Route Planning and Obstacle Avoidance
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In the robotics literature, simultaneous localization and mapping (SLAM) is commonly associated with an a priori–posteriori problem. The autonomous vehicle needs a neutral map to spontaneously track its local position, i.e., "localization", while at the same time a precise estimation of the state of the environment is required for effective route planning and obstacle avoidance. On the other hand, environmental noise factors can significantly intensify the inherent uncertainties in using odometry information and the measurements obtained from the robot's exteroceptive sensor, which in turn directly affect the overall performance of the corresponding SLAM. Therefore, the current work is primarily dedicated to providing a diagnostic analysis of several SLAM algorithms, including FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, and DP-SLAM. A simulated SLAM environment consisting of two sets of landmark locations and robot waypoints was set up based on modified EKF and UKF in MATLAB, using two separate maps for indoor and outdoor route planning subject to natural and artificial obstacles. The simulation results are expected to provide an unbiased platform for comparing the estimation performance of the five SLAM models as well as the reliability of each SLAM model for indoor and outdoor applications.
Keywords: route planning, obstacle, estimation performance, FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, DP-SLAM
Procedia PDF Downloads 443
2489 A Comprehensive Analysis of the Phylogenetic Signal in Ramp Sequences in 211 Vertebrates
Authors: Lauren M. McKinnon, Justin B. Miller, Michael F. Whiting, John S. K. Kauwe, Perry G. Ridge
Abstract:
Background: Ramp sequences increase translational speed and accuracy when rare, slowly translated codons are found at the beginnings of genes. Here, the results of the first analysis of ramp sequences in a phylogenetic construct are presented. Methods: Ramp sequences were compared across 211 vertebrates (110 mammalian and 101 non-mammalian). The presence and absence of ramp sequences were analyzed as a binary character in a parsimony and maximum likelihood framework. Additionally, ramp sequences were mapped to the Open Tree of Life taxonomy to determine the number of parallelisms and reversals that occurred, and these results were compared to what would be expected due to random chance. Lastly, aligned nucleotides in ramp sequences were compared to the rest of the sequence in order to examine possible differences in phylogenetic signal between these regions of the gene. Results: Parsimony and maximum likelihood analyses of the presence/absence of ramp sequences recovered phylogenies that are highly congruent with established phylogenies. Additionally, the retention index of ramp sequences is significantly higher than would be expected due to random chance (p-value = 0). A chi-square analysis of completely orthologous ramp sequences resulted in a p-value of approximately zero as compared to random chance. Discussion: Ramp sequences recover phylogenies comparable to other phylogenomic methods. Although not all ramp sequences appear to carry a phylogenetic signal, more ramp sequences track speciation than expected by random chance. Therefore, ramp sequences may be used in conjunction with other phylogenomic approaches.
Keywords: codon usage bias, phylogenetics, phylogenomics, ramp sequence
Procedia PDF Downloads 156
2488 The Cost and Benefit on the Investment in Safety and Health of the Enterprises in Thailand
Authors: Charawee Butbumrung
Abstract:
The purpose of this study is to evaluate the monetary worthiness of investment and the usefulness of risk estimation as a tool employed by the production section of an electronics factory. The study employs cases of accidents occurring in production areas. Data were collected from interviews with six production safety coordinators and from the relevant sections. The study presents the ratio of benefits to the operating costs of the investment. The results showed that investment in the safety measures is worthwhile. In addition, organizations must be able to analyze the causes of accidents and the benefits of investing in protective working processes. They also need to quickly provide a manual for staff to learn how to protect themselves from accidents and how to use all of the safety equipment.
Keywords: cost and benefit, enterprises in Thailand, investment in safety and health, risk estimation
Procedia PDF Downloads 263
2487 Estimation of Population Mean under Random Non-Response in Two-Occasion Successive Sampling
Authors: M. Khalid, G. N. Singh
Abstract:
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in two-occasion successive sampling under random non-response. Some modified exponential-type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a discrete distribution due to random non-response. The performance of the proposed estimators is compared with linear combinations of two estimators, (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response. Results are demonstrated through empirical studies which show the effectiveness of the proposed estimators. Suitable recommendations are made to survey practitioners.
Keywords: modified exponential estimator, successive sampling, random non-response, auxiliary variable, bias, mean square error
Procedia PDF Downloads 349
2486 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation
Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi
Abstract:
Most drones that currently carry out surveillance/reconnaissance missions are equipped primarily with optical equipment, but a microphone array can also be used to estimate the location of an acoustic source, providing additional information in the absence of optical equipment. The purpose of this study is to estimate the Direction of Arrival (DOA), based on Time Difference of Arrival (TDOA) estimation, of an acoustic source from the drone. The problem is that it is impossible to measure the target acoustic source cleanly because of the drone's own noise. This problem is overcome by separating the drone noise and the target acoustic source using Blind Source Separation (BSS) based on Independent Component Analysis (ICA). ICA can be performed assuming that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, negentropy and kurtosis, based on probability theory, are used. As a result, TDOA estimation and DOA estimation of the target source in the noisy environment are improved. The performance of the DOA algorithm applying the BSS algorithm was simulated and demonstrated through experiments in an anechoic wind tunnel.
Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone
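The TDOA-to-DOA step described above can be sketched very simply for a two-microphone, far-field case: estimate the delay from the peak of the cross-correlation and convert it to an angle via tau = d*sin(theta)/c. The geometry, sampling rate, and simulated signal below are assumptions for illustration; the separation (BSS/ICA) stage is not included.

```python
import numpy as np

# Hedged sketch: TDOA between two microphones from cross-correlation, then DOA
# for a far-field source. Geometry, sampling rate, and signal are assumed.

fs = 48_000            # sampling rate [Hz]
c = 343.0              # speed of sound [m/s]
d = 0.20               # microphone spacing [m]
true_angle = np.deg2rad(35.0)

rng = np.random.default_rng(0)
n = 4096
src = rng.standard_normal(n)                         # broadband source
delay_n = int(round(d * np.sin(true_angle) / c * fs))
mic1 = src + 0.05 * rng.standard_normal(n)
mic2 = np.roll(src, delay_n) + 0.05 * rng.standard_normal(n)

# TDOA from the peak of the cross-correlation.
corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (n - 1)
tdoa = lag / fs

# DOA from the far-field relation tdoa = d * sin(theta) / c.
theta = np.arcsin(np.clip(tdoa * c / d, -1.0, 1.0))
print(f"estimated DOA: {np.degrees(theta):.1f} deg (true {np.degrees(true_angle):.1f} deg)")
```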
Procedia PDF Downloads 161
2485 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering
Authors: Hamza Benzerrouk, Alexander Nebylov
Abstract:
In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of handling GNSS outliers and outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudorange and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs of the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF, or HCKF). The EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches called Cubature and High Degree Cubature Kalman Filtering methods. Building on previous results for state estimation based on INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature rule based Kalman Filter (GCKF). High degree cubature rules are the kernel of the new solution for more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF); the Gauss-Hermite Kalman Filter (GHKF) is not selected in this work because of its limited real-time applicability in high-dimensional state spaces. In the ultra-tightly or deeply coupled GNSS/INS system, a dynamics EKF is used with transition matrix factorization together with GNSS block processing, which is well described in the paper; the intermediate frequency (IF) is assumed available by using correlator samples at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high order CKF, based on spherical-radial cubature rules developed at the fifth order in this work. The estimation accuracy of the high degree CKF is expected to be comparable to that of the GHKF; the state estimation results are then observed and discussed for different initialization parameters. Results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High Degree Cubature Kalman Filter.
Keywords: GNSS, INS, Kalman filtering, ultra tight integration
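At the core of the cubature filters mentioned above is the spherical-radial cubature rule: at third degree it uses 2n equally weighted points, mean ± sqrt(n) times the columns of a covariance square root, propagated through the nonlinearity to approximate the transformed mean and covariance. The sketch below illustrates that single step on an invented polar-to-Cartesian example; it is not the fifth-order rule, the GNSS/INS model, or the authors' implementation.

```python
import numpy as np

# Hedged sketch of the third-degree spherical-radial cubature rule used by the CKF:
# 2n equally weighted points xi_i = mean + sqrt(n) * L * (+/- e_i), propagated
# through a nonlinear function. The function and moments below are illustrative.

def cubature_points(mean, cov):
    n = mean.size
    L = np.linalg.cholesky(cov)
    unit = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)], axis=0)  # (2n, n)
    return mean + unit @ L.T                                             # (2n, n)

def propagate(f, mean, cov):
    pts = cubature_points(mean, cov)
    fpts = np.array([f(p) for p in pts])
    m = fpts.mean(axis=0)                      # equal weights 1/(2n)
    d = fpts - m
    return m, d.T @ d / fpts.shape[0]

# Example: polar-to-Cartesian transformation of an uncertain range/bearing.
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
mean = np.array([10.0, np.deg2rad(30.0)])
cov = np.diag([0.5 ** 2, np.deg2rad(2.0) ** 2])

m, P = propagate(f, mean, cov)
print("transformed mean      :", np.round(m, 3))
print("transformed covariance:\n", np.round(P, 4))
```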
Procedia PDF Downloads 279
2484 The Impact of Diversification Strategy on Leverage and Accrual-Based Earnings Management
Authors: Safa Lazzem, Faouzi Jilani
Abstract:
The aim of this research is to investigate the impact of diversification strategy on the nature of the relationship between leverage and accrual-based earnings management through panel estimation techniques, based on a sample of 162 non-financial French firms indexed in CAC All-Tradable over the period from 2006 to 2012. The empirical results show that leverage increases encourage managers to manipulate earnings. Our findings show that the diversification strategy provides the context needed for this accounting practice to be possible in highly diversified firms. In addition, the results indicate that diversification moderates the relationship between leverage and accrual-based earnings management by changing the nature and the sign of this relationship.
Keywords: diversification, earnings management, leverage, panel-estimation techniques
Procedia PDF Downloads 148
2483 Telling the Truth to Patients Before Hip Fracture Surgery
Authors: Rawan Masarwa, Merav Ben Natan, Yaron Berkovich
Abstract:
Background: Hip fracture repair surgery carries a certain mortality risk, yet evidence suggests that orthopedic surgeons often refrain from discussing this issue with patients prior to surgery. Aim: This study aims to examine whether orthopedic surgeons address the issue of one-year post-surgery mortality before hip fracture repair surgery and to explore the factors influencing this decision. Method: The study uses a cross-sectional design, administering validated digital questionnaires to 150 orthopedic surgeons. Results: A minority of orthopedic surgeons reported consistently informing patients about the risk of mortality in the year following hip fracture surgery. The primary reasons for not discussing this risk were a desire to avoid frightening patients, time constraints, and concerns about undermining patient hope. Surgeons reported a medium-high level of perceived self-efficacy, with higher self-efficacy linked to a reduced likelihood of discussing one-year mortality risk. In contrast, older age and holding a specialist status in orthopedic surgery were associated with a higher likelihood of discussing this risk with patients. Conclusions: The findings suggest a need for interventions to address communication barriers and ensure consistent provision of essential information to patients undergoing hip fracture surgery. Additionally, they emphasize the importance of considering individual factors such as self-efficacy, age, and expertise in developing strategies to enhance patient-provider communication in orthopedic care settings.
Keywords: orthopedic surgeons, hip fracture surgery, mortality risk communication, patient information
Procedia PDF Downloads 24
2482 Computational Models for Accurate Estimation of Joint Forces
Authors: Ibrahim Elnour Abdelrahman Eltayeb
Abstract:
Computational modelling is a method used to investigate joint forces during movement. It can achieve high accuracy in estimating joint forces via subject-specific models. However, the construction of subject-specific models remains time-consuming and expensive. The purpose of this paper was to identify what alterations can be made to generic computational models to obtain a better estimation of the joint forces, and to appraise the impact of these alterations on the accuracy of the estimated joint forces. Different strategies of alteration were found: the joint model, the muscle model, and the optimisation problem. All these alterations affected joint contact force accuracy, showing the potential for improving model predictions without involving costly and time-consuming medical images.
Keywords: joint force, joint model, optimisation problem, validation
Procedia PDF Downloads 168
2481 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate
Abstract:
Before designing an electrical system, load estimation is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into the grid, especially as there are non-electrified villages in developing countries. In the classical model, the energy demand is found by taking the household appliances multiplied by their ratings and the durations of their operation; in this paper, however, the information that exists for electrified villages is used to predict the demand, as villages have almost the same lifestyle. This paper describes a method used to predict the average energy consumed in each two-month period by every consumer living in a village using an Artificial Neural Network (ANN). The input data are collected through a regional survey of sample consumers representing typical living conditions, household appliances, and energy consumption, and the output data are collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand of different consumers from four villages in different months throughout the year is approximately 12 kWh/day, and the model estimates the average daily demand for every consumer with a mean absolute percent error of 11.8%. The MathWorks software package MATLAB version 7.6.0, which contains the Neural Network Toolbox, was used.
Keywords: artificial neural network, load estimation, regional survey, rural electrification
Procedia PDF Downloads 122
2480 Analysis of Two-Phase Flow Instabilities in Conventional Channel of Nuclear Power Reactor
Authors: M. Abdur Rashid Sarkar, Riffat Mahmud
Abstract:
Boiling heat transfer plays a crucial role in cooling nuclear reactors for safe electricity generation. A two-phase flow is susceptible to thermal-hydrodynamic instabilities, which may cause flow oscillations of constant or diverging amplitude. These oscillations may induce boiling crisis, disturb control systems, or cause mechanical damage. Based on their mechanisms, various types of instabilities can be classified for a nuclear reactor. From a practical engineering point of view, one of the major design difficulties in dealing with multiphase flow is that the mass, momentum, and energy transfer rates and processes may be quite sensitive to the geometric configuration of the heat transfer surface. Moreover, the flow within each phase or component will clearly depend on that geometric configuration. The complexity of this two-way coupling presents a major challenge in the study of multiphase flows, and much remains to be done. Here, the parametric effects on flow instability, such as aspect ratio, pressure drop, channel length and orientation, inlet subcooling, and surface roughness, are analyzed. Another frequently occurring instability, known as the Kelvin–Helmholtz instability, is briefly reviewed. Various analytical techniques for predicting the parametric effects on the instability are analyzed in terms of their applicability and accuracy.
Keywords: two phase flows, boiling crisis, thermal-hydrodynamic instabilities, water cooled nuclear reactors, kelvin–helmholtz instability
Procedia PDF Downloads 396
2479 Software Defect Analysis - Eclipse Dataset
Authors: Amrane Meriem, Oukid Salyha
Abstract:
The presence of defects or bugs in software can lead to costly setbacks, operational inefficiencies, and compromised user experiences. The integration of Machine Learning (ML) techniques has emerged as a way to predict and preemptively address software defects. ML represents a proactive strategy aimed at identifying potential anomalies, errors, or vulnerabilities within code before they manifest as operational issues. By analyzing historical data, such as code changes, feature implementations, and defect occurrences, development teams can anticipate and mitigate these issues, thus enhancing software quality, reducing maintenance costs, and ensuring smoother user interactions. In this work, we used a recommendation system to improve the performance of ML models in terms of predicting code severity and effort estimation.
Keywords: software engineering, machine learning, bugs detection, effort estimation
Procedia PDF Downloads 84
2478 Statistical Analysis of Extreme Flow (Regions of Chlef)
Authors: Bouthiba Amina
Abstract:
The estimation of statistics related to precipitation is a vast domain that poses numerous challenges to meteorologists and hydrologists. Sometimes it is necessary to approximate the values of extreme events, and their return periods, for sites where there are few or no data. The search for a frequency model of daily rainfall depths is of great importance in operational hydrology: it establishes a basis for predicting the frequency and intensity of floods by estimating the amount of precipitation in past years. The best-known and most common approach is the statistical one. It consists of looking for the probability law that best fits the observed values of the random variable "daily maximum rainfall", after comparing various probability laws and estimation methods by means of goodness-of-fit tests. Therefore, a frequency analysis of the annual series of daily maximum rainfall was carried out on data from 54 rain-gauge stations of the upper and middle basin. Five laws usually applied to the study and analysis of maximum daily rainfall were considered. The chosen period is from 1970 to 2013, and the analysis was used to forecast quantiles. The laws used are the three-parameter generalized extreme value law, the two-parameter extreme value laws (Gumbel and log-normal), and the three-parameter Pearson type III and Log-Pearson III laws. In Algeria, Gumbel's law has long been used to estimate the quantiles of maximum flows; here, we check this and choose the most reliable law.
Keywords: return period, extreme flow, statistics laws, Gumbel, estimation
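As a minimal illustration of the frequency analysis described above, the sketch below fits a Gumbel law to a synthetic series of annual maximum daily rainfall and reads off quantiles for a few return periods T through the non-exceedance probability 1 - 1/T. The sample values are invented; they are not the 54-station Chlef data.

```python
import numpy as np
from scipy.stats import gumbel_r

# Hedged sketch: fit a Gumbel (EV type I) law to annual maximum daily rainfall
# and estimate quantiles for chosen return periods. The sample is synthetic.

rng = np.random.default_rng(3)
annual_max_rain = gumbel_r.rvs(loc=45.0, scale=12.0, size=44, random_state=rng)  # mm

loc, scale = gumbel_r.fit(annual_max_rain)   # maximum likelihood fit

for T in (10, 50, 100):                      # return periods in years
    p = 1.0 - 1.0 / T                        # non-exceedance probability
    q = gumbel_r.ppf(p, loc=loc, scale=scale)
    print(f"T = {T:3d} yr  ->  estimated daily rainfall quantile ~ {q:.1f} mm")
```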
Procedia PDF Downloads 77
2477 Non-Parametric, Unconditional Quantile Estimation of Efficiency in Microfinance Institutions
Authors: Komlan Sedzro
Abstract:
We apply the non-parametric, unconditional, hyperbolic order-α quantile estimator to appraise the relative efficiency of microfinance institutions in Africa in terms of outreach. Our purpose is to verify whether these institutions, which must constantly try to strike a compromise between their social role and financial sustainability, are operationally efficient. Using data on African MFIs extracted from the Microfinance Information eXchange (MIX) database and covering the 2004 to 2006 period, we find that the more efficient MFIs are also the most profitable. This result is in line with the view that social performance is not in contradiction with the pursuit of excellent financial performance. Our results also show that large MFIs in terms of assets, and those charging the highest fees, are not necessarily the most efficient.
Keywords: data envelopment analysis, microfinance institutions, quantile estimation of efficiency, social and financial performance
Procedia PDF Downloads 305
2476 The New Propensity Score Method and Assessment of Propensity Score: A Simulation Study
Authors: Azam Najafkouchak, David Todem, Dorothy Pathak, Pramod Pathak, Joseph Gardiner
Abstract:
Propensity score (PS) methods have recently become the standard analysis tool for causal inference in observational studies, where exposure is not randomly assigned and confounding can therefore impact the estimation of the treatment effect on the outcome. Because of the dangers of discretizing continuous variables, the focus of this paper is on how the variation in cut-points or boundaries affects the average treatment effect obtained with the stratification PS method. In this study, we develop a new methodology to improve the efficiency of the PS analysis through stratification and a simulation study. We also explore the properties of the empirical distribution of the average treatment effect theoretically, including the asymptotic distribution, variance estimation, and 95% confidence intervals.
Keywords: propensity score, stratification, empirical distribution, average treatment effect
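A minimal sketch of stratification on the propensity score, assuming a logistic-regression PS model and quintile boundaries: within each stratum the treated-control outcome difference is computed, and the differences are averaged with stratum-size weights. The data-generating process, cut-points, and sample size below are invented for illustration and are not the paper's simulation design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch of propensity-score stratification with quintile cut-points.
# The data-generating process below is synthetic; the true effect is 2.0.

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=(n, 2))                                  # confounders
p_treat = 1.0 / (1.0 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
t = rng.binomial(1, p_treat)                                 # treatment assignment
y = 2.0 * t + 1.5 * x[:, 0] + x[:, 1] + rng.normal(size=n)   # outcome

ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]   # estimated PS

edges = np.quantile(ps, [0.0, 0.2, 0.4, 0.6, 0.8, 1.0])      # quintile boundaries
strata = np.clip(np.digitize(ps, edges[1:-1]), 0, 4)

ate, weight = 0.0, 0.0
for s in range(5):
    m = strata == s
    if t[m].sum() == 0 or (1 - t[m]).sum() == 0:
        continue                                             # skip cells without both groups
    diff = y[m & (t == 1)].mean() - y[m & (t == 0)].mean()
    ate += diff * m.sum()
    weight += m.sum()

print("stratified ATE estimate:", round(ate / weight, 2))    # should be near 2.0
print("naive difference       :", round(y[t == 1].mean() - y[t == 0].mean(), 2))
```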
Procedia PDF Downloads 96
2475 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand
Authors: Chukiat Chaiboonsri, Satawat Wannapan
Abstract:
This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables statistically analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the result of the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand had perfect information (symmetrical data). The second was the result of the Maximum Entropy Bootstrapping (MEboot) approach, based on a process that attempts to deal with imperfect information and reduce uncertainty in data observations (asymmetrical data). In addition, tourism leakages were investigated with a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed from the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.
Keywords: Thailand tourism, Maximum Entropy Bootstrapping approach, macroeconomic model, asymmetric information
Procedia PDF Downloads 293
2474 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
In this paper, a performance comparison of the wideband covariance matrix sparse representation (W-CMSR) method with other existing wideband Direction of Arrival (DOA) estimation methods is made. W-CMSR relies less on a priori information about the number of incident signals than ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by an unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper right triangular elements can be represented by the lower left triangular ones. As the main diagonal elements are contaminated by the unknown noise variance, they are skipped, and the lower left triangular elements are aligned column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with those of other wideband DOA estimation methods, namely the coherent signal subspace method (CSSM), Capon, L1-SVD, and JLZA-DOA. W-CMSR separates two signals very clearly, whereas CSSM, Capon, L1-SVD, and JLZA-DOA fail to separate the two signals clearly, and a number of pseudo peaks exist in the spectrum of L1-SVD.
Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering
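The measurement-vector construction described above (skip the noise-contaminated diagonal, stack the lower-left triangular covariance elements column by column) can be sketched in a few lines. The array size, source, and snapshot count below are assumptions for illustration; the sparse-representation stage of W-CMSR itself is not shown.

```python
import numpy as np

# Hedged sketch: form the sample covariance of the array output, skip the
# noise-contaminated diagonal, and stack the strictly lower-triangular elements
# column by column into a measurement vector. Array size and data are assumed.

rng = np.random.default_rng(0)
m, snapshots = 8, 200                          # sensors, time snapshots

# Simulated array output: one narrowband source plus noise (illustrative only).
steering = np.exp(1j * np.pi * np.arange(m) * np.sin(np.deg2rad(20.0)))
signal = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.3 * (rng.standard_normal((m, snapshots)) + 1j * rng.standard_normal((m, snapshots)))
X = np.outer(steering, signal) + noise

R = X @ X.conj().T / snapshots                 # sample covariance matrix (m x m)

# Strictly-lower-triangular elements, column by column, skipping the diagonal.
cols = [R[j + 1:, j] for j in range(m - 1)]
measurement_vector = np.concatenate(cols)      # length m*(m-1)/2

print("covariance shape   :", R.shape)
print("measurement vector :", measurement_vector.shape)   # (28,) for m = 8
```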
Procedia PDF Downloads 469
2473 A Picture is worth a Billion Bits: Real-Time Image Reconstruction from Dense Binary Pixels
Authors: Tal Remez, Or Litany, Alex Bronstein
Abstract:
The pursuit of smaller pixel sizes at ever increasing resolution in digital image sensors is mainly driven by the stringent price and form-factor requirements of sensors and optics in the cellular phone market. Recently, Eric Fossum proposed a novel concept of an image sensor with dense sub-diffraction limit one-bit pixels (jots), which can be considered a digital emulation of silver halide photographic film. This idea has been recently embodied as the EPFL Gigavision camera. A major bottleneck in the design of such sensors is the image reconstruction process, producing a continuous high dynamic range image from oversampled binary measurements. The extreme quantization of the Poisson statistics is incompatible with the assumptions of most standard image processing and enhancement frameworks. The recently proposed maximum-likelihood (ML) approach addresses this difficulty, but suffers from image artifacts and has impractically high computational complexity. In this work, we study a variant of a sensor with binary threshold pixels and propose a reconstruction algorithm combining an ML data fitting term with a sparse synthesis prior. We also show an efficient hardware-friendly real-time approximation of this inverse operator. Promising results are shown on synthetic data as well as on HDR data emulated using multiple exposures of a regular CMOS sensor.
Keywords: binary pixels, maximum likelihood, neural networks, sparse coding
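For intuition about the ML data-fitting term mentioned above, the sketch below works out the closed-form maximum-likelihood exposure estimate for unit-threshold binary pixels under Poisson statistics: each jot fires with probability 1 - exp(-lambda), so from k firings among n jots the estimate is lambda_hat = -ln(1 - k/n). The jot counts and exposures are assumed for illustration; this is not the authors' sparse-synthesis reconstruction algorithm.

```python
import numpy as np

# Hedged sketch: ML exposure estimate for one-bit pixels (jots) with threshold 1.
# Each jot fires if it collects at least one photon, so P(fire) = 1 - exp(-lambda)
# and the MLE from k firings out of n jots is lambda_hat = -ln(1 - k/n).
# Parameters are illustrative, not the paper's sensor or algorithm.

rng = np.random.default_rng(0)
true_exposure = np.array([0.05, 0.2, 0.8, 2.0])   # mean photons per jot
n_jots = 4096                                      # binary measurements per pixel

for lam in true_exposure:
    photons = rng.poisson(lam, size=n_jots)
    k = np.count_nonzero(photons >= 1)             # jots that crossed the threshold
    lam_hat = -np.log(1.0 - k / n_jots) if k < n_jots else np.inf
    print(f"true {lam:4.2f} -> ML estimate {lam_hat:5.3f} (firing rate {k / n_jots:.3f})")
```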
Procedia PDF Downloads 200