Search results for: joint estimation
2223 Is Privatization Related with Macroeconomic Management? Evidence from Some Selected African Countries
Authors: E. O. George, P. Ojeaga, D. Odejimi, O. Matthews
Abstract:
Has macroeconomic management succeeded in making privatization promote growth in Africa? What are the probable strategies that should accompany the privatization reform process to promote growth in Africa? To what extent has the privatization process succeeded in attracting foreign direct investment to Africa? The study investigates the relationship between macroeconomic management and privatization. Many African countries have embarked on one form of privatization reform or another since 1980, as one of the stringent conditions for accessing capital from the IMF and the World Bank. Second, globalization and the gradual integration of the African economy into the global economy mean that Africa has to strategically develop its domestic market to cushion itself from fluctuations and probable contagion associated with global economic crises, which, as Stiglitz argues, are inevitable. The methods of estimation used are OLS, linear mixed effects (LME), 2SLS and GMM. It was found that macroeconomic management has the capacity to affect the success of the privatization reform process. It was also found that privatization was not promoting growth in Africa; privatization could promote growth if long-run growth strategies are implemented together with the privatization reform process. Privatization was also found not to have the capacity to attract foreign investment to many African countries.
Keywords: Africa, political economy, game theory, macroeconomic management and privatization
Procedia PDF Downloads 328
2222 Inference for Compound Truncated Poisson Lognormal Model with Application to Maximum Precipitation Data
Authors: M. Z. Raqab, Debasis Kundu, M. A. Meraou
Abstract:
In this paper, we have analyzed maximum precipitation data during a particular period of time obtained from different stations in the Global Historical Climatological Network of the USA. One important point to mention is that some stations are shut down on certain days for one reason or another; hence, the maximum values are recorded by excluding those readings. It is assumed that the number of stations in operation follows a zero-truncated Poisson random variable, and the daily precipitation follows a lognormal random variable. We call this model a compound truncated Poisson lognormal model. The proposed model has three unknown parameters, and it can take a variety of shapes. The maximum likelihood estimators can be obtained quite conveniently using the Expectation-Maximization (EM) algorithm. Approximate maximum likelihood estimators are also derived. The associated confidence intervals can also be obtained from the observed Fisher information matrix. Simulations have been performed to check the performance of the EM algorithm, and it is observed that the EM algorithm works quite well in this case. When we analyze the precipitation data set using the proposed model, it is observed that the proposed model provides a better fit than some of the existing models.
Keywords: compound Poisson lognormal distribution, EM algorithm, maximum likelihood estimation, approximate maximum likelihood estimation, Fisher information, skew distribution
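As a concrete illustration of the model just described, the following sketch simulates one observed maximum: a zero-truncated Poisson number of operating stations, each reporting a lognormal precipitation value. Function names and parameter values here are illustrative, not taken from the paper.

```python
import math
import random

def zero_truncated_poisson(lam, rng):
    """Draw N >= 1 by rejection from a Poisson(lam) sampler (Knuth's method)."""
    while True:
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                break
            k += 1
        if k >= 1:  # reject the zero outcome (station count must be >= 1)
            return k

def sample_max_precipitation(lam, mu, sigma, rng):
    """One draw from the compound truncated Poisson lognormal model:
    the recorded maximum over the N stations that operated that day."""
    n = zero_truncated_poisson(lam, rng)
    return max(math.exp(rng.gauss(mu, sigma)) for _ in range(n))
```

Repeated draws from `sample_max_precipitation` trace out the compound distribution whose three parameters (lam, mu, sigma) the EM algorithm estimates.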
Procedia PDF Downloads 107
2221 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure
Authors: Esra Zengin, Sinan Akkar
Abstract:
Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match the target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum, such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS) or the Conditional Spectrum (CS). Different sets of criteria exist among these methodologies to select and scale ground motions with the objective of obtaining a robust estimate of the structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectra, while the dispersion of the earthquake shaking is preserved along the period interval. The impact of the spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimations, results are compared with those obtained by the CMS-based scaling methodology.
The variability in fragility curves due to the consideration of dispersion in ground motion selection process is also examined.
Keywords: ground motion selection, scaling, uncertainty, fragility curve
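The scaling stage described above, matching the ensemble median to the target in log space while leaving the record-to-record dispersion intact, can be sketched as follows. This is a simplified single-factor version under the observation that one common scale factor shifts every log-spectrum equally; the function name is illustrative, not the authors' implementation.

```python
import numpy as np

def median_matching_scale(record_spectra, target_median):
    """Single scale factor applied to all records so the log-median of the
    scaled spectra best matches the target median spectrum (least squares
    over the period grid). Scaling shifts all log-spectra equally, so the
    ensemble dispersion is preserved."""
    log_sa = np.log(record_spectra)             # shape: (n_records, n_periods)
    ens_median = np.median(log_sa, axis=0)      # ensemble log-median per period
    log_s = np.mean(np.log(target_median) - ens_median)  # least-squares shift
    return np.exp(log_s)
```

For example, if the target median is exactly twice the ensemble median at every period, the returned factor is 2.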
Procedia PDF Downloads 582
2220 FPGA Based Vector Control of PM Motor Using Sliding Mode Observer
Authors: Hanan Mikhael Dawood, Afaneen Anwer Abood Al-Khazraji
Abstract:
The paper presents an investigation of a field-oriented control strategy for a Permanent Magnet Synchronous Motor (PMSM) based on hardware-in-the-loop (HIL) simulation over a wide speed range. Sensorless rotor position estimation using a sliding mode observer for the permanent magnet synchronous motor is illustrated, considering the effects of magnetic saturation between the d and q axes. The cross saturation between the d and q axes has been calculated by finite-element analysis. The inductance measurements therefore account for saturation and cross saturation, and are used to obtain the suitable id-characteristics in the base and flux-weakening regions. Real-time matrix multiplication is implemented in a Field Programmable Gate Array (FPGA) using a floating-point number system; the Quartus-II environment is used to develop the FPGA designs, which are then downloaded into the development kit. A dSPACE DS1103 is utilized for Pulse Width Modulation (PWM) switching and the controller. The hardware-in-the-loop results were compared with those from the Matlab simulation. Various dynamic conditions have been investigated.
Keywords: magnetic saturation, rotor position estimation, sliding mode observer, hardware in the loop (HIL)
Procedia PDF Downloads 525
2219 Evaluation of Current Methods in Modelling and Analysis of Track with Jointed Rails
Authors: Hossein Askarinejad, Manicka Dhanasekar
Abstract:
In railway tracks, two adjacent rails are either welded or connected using bolted jointbars. In recent years the number of bolted rail joints has been reduced by the introduction of longer rail sections and by welding the rails at the location of some joints. However, a significant number of bolted rail joints remains in railways around the world, as they are required to allow for rail thermal expansion or to provide electrical insulation in some sections of track. Regardless of the quality and integrity of the jointbar and bolt connections, the bending stiffness of jointbars is much lower than that of the rail, generating large deflections under the train wheels. In addition, the gap or surface discontinuity on the rail running surface leads to the generation of high wheel-rail impact forces at the joint gap. These fundamental weaknesses have caused a high rate of failure in track components at the location of rail joints, resulting in significant economic and safety issues in railways. The mechanical behavior of railway track at the location of joints has not been fully understood due to various structural and material complexities. Although there have been some improvements in the methods for analysis of track at jointed rails in recent years, there are still uncertainties concerning the accuracy and reliability of the current methods. In this paper the current methods for analysis of track with a rail joint are critically evaluated, and the new advances and recent research outcomes in this area are discussed. This research is part of a large granted project on rail joints which was defined by the Cooperative Research Centre (CRC) for Rail Innovation with support from the Australian Rail Track Corporation (ARTC) and Queensland Rail (QR).
Keywords: jointed rails, railway mechanics, track dynamics, wheel-rail interaction
Procedia PDF Downloads 348
2218 Microvoid Growth in the Interfaces during Aging
Authors: Jae-Yong Park, Gwancheol Seo, Young-Ho Kim
Abstract:
Microvoids, sometimes called Kirkendall voids, generally form at the interfaces between Sn-based solders and Cu and degrade the mechanical and electrical properties of the solder joints. Microvoid formation is attributed to rapid interdiffusion between Sn and Cu and to the impurity content of the Cu. Cu electroplating from acid solutions has been widely used by the microelectronic packaging industry for both printed circuit board (PCB) and integrated circuit (IC) applications. The quality of the electroplated Cu, which can be optimized through the electroplating conditions, is critical for solder joint reliability. In this paper, the influence of the electroplating conditions on microvoid growth at the interfaces between Sn-3.0Ag-0.5Cu (SAC) solder and the Cu layer was investigated during isothermal aging. The Cu layers were electroplated with varying bath additives and current densities to induce various microvoid densities. The electroplating bath consisted of sulfate, sulfuric acid, and additives, and current densities of 5-15 mA/cm2 were used for each bath. After aging at 180 °C for up to 250 h, the typical bi-layer of Cu6Sn5 and Cu3Sn intermetallic compounds (IMCs) grew gradually at the SAC/Cu interface, and the microvoid density in the Cu3Sn varied with the electroplating conditions. As the current density increased, microvoid formation was accelerated in all electroplating baths: the higher the current density, the higher the impurity content in the electroplated Cu. When polyethylene glycol (PEG) and Cl- ions were mixed in an electroplating bath, microvoid formation was the highest compared to the other electroplating baths. On the other hand, the overall IMC thickness was similar in all samples irrespective of the electroplating conditions. The impurity content in the electroplated Cu influenced the microvoid growth, but the IMC growth was not affected by the impurity content.
In conclusion, the electroplating conditions should be properly optimized to avoid excessive microvoid formation, which results in brittle fracture of the solder joint under high strain rate loading.
Keywords: electroplating, additive, microvoid, intermetallic compound
Procedia PDF Downloads 258
2217 Offline Parameter Identification and State-of-Charge Estimation for Healthy and Aged Electric Vehicle Batteries Based on the Combined Model
Authors: Xiaowei Zhang, Min Xu, Saeid Habibi, Fengjun Yan, Ryan Ahmed
Abstract:
Recently, Electric Vehicles (EVs) have received extensive consideration since they offer a more sustainable and greener transportation alternative compared to fossil-fuel propelled vehicles. Lithium-Ion (Li-ion) batteries are increasingly being deployed in EVs because of their high energy density, high cell-level voltage, and low rate of self-discharge. Since Li-ion batteries represent the most expensive component in the EV powertrain, accurate monitoring and control strategies must be executed to ensure their prolonged lifespan. The Battery Management System (BMS) has to accurately estimate parameters such as the battery State-of-Charge (SOC), State-of-Health (SOH), and Remaining Useful Life (RUL). In order for the BMS to estimate these parameters, an accurate and control-oriented battery model has to work collaboratively with a robust state and parameter estimation strategy. Since battery physical parameters, such as the internal resistance and diffusion coefficient, change depending on the battery state-of-life (SOL), the BMS has to be adaptive to accommodate this change. In this paper, an extensive battery aging study has been conducted over a 12-month period on 5.4 Ah, 3.7 V lithium polymer cells. Instead of using fixed charging/discharging aging cycles at a fixed C-rate, a set of real-world driving scenarios has been used to age the cells. The test has been interrupted at every 5% of capacity degradation by a set of reference performance tests to assess the battery degradation and track the model parameters. As the battery ages, the combined model parameters are optimized and tracked in an offline mode over the entire battery lifespan.
Based on the optimized model, a state and parameter estimation strategy based on the Extended Kalman Filter (EKF) and the relatively new Smooth Variable Structure Filter (SVSF) has been applied to estimate the SOC at various states of life.
Keywords: lithium-ion batteries, genetic algorithm optimization, battery aging test, parameter identification
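A minimal sketch of how an EKF corrects an SOC estimate from a terminal-voltage measurement is given below. This is a one-state simplification with an assumed OCV curve, intended only to illustrate the predict/update cycle; it is not the combined model or the SVSF used in the paper, and all parameter values are hypothetical.

```python
def soc_ekf_step(soc, P, i_amp, v_meas, dt, Q_cap, R0, ocv, docv,
                 q=1e-7, r=1e-3):
    """One EKF step for a 1-state SOC model:
    soc' = soc - i*dt/Q_cap (coulomb counting), v = ocv(soc) - R0*i."""
    # Predict: the state transition is linear in soc (Jacobian = 1)
    soc_pred = soc - i_amp * dt / (3600.0 * Q_cap / 3600.0)  # i*dt/Q_cap
    P_pred = P + q
    # Update: linearize the measurement model around the prediction
    H = docv(soc_pred)                       # d(OCV)/d(SOC)
    v_pred = ocv(soc_pred) - R0 * i_amp
    K = P_pred * H / (H * P_pred * H + r)    # Kalman gain
    soc_new = soc_pred + K * (v_meas - v_pred)
    P_new = (1.0 - K * H) * P_pred
    return soc_new, P_new
```

Iterating this step drives a poor initial SOC guess toward the value consistent with the measured voltage, while the covariance P shrinks.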
Procedia PDF Downloads 265
2216 Support Vector Machine Based Retinal Therapeutic for Glaucoma Using Machine Learning Algorithm
Authors: P. S. Jagadeesh Kumar, Mingmin Pan, Yang Yung, Tracy Lin Huan
Abstract:
Glaucoma is a group of visual disorders characterized by progressive optic nerve neuropathy, leading to a gradual narrowing of the visual field and ultimately loss of sight. In this paper, a novel support vector machine based retinal therapeutic for glaucoma using a machine learning algorithm is presented. The algorithm, built on a correlation clustering mode, performs its computations in a multi-dimensional space. Support vector clustering turns out to be comparable to the scale-space approach, which investigates the cluster organization by means of a kernel density estimation of the probability distribution, where cluster centers are identified by the local maxima of the density. The proposed approach achieves a 91% success rate on a data set consisting of 500 realistic images of normal and glaucoma retinas; the computational benefit of relying on the cluster overlapping system based on the machine learning algorithm therefore yields strong performance for glaucoma therapeutics.
Keywords: machine learning algorithm, correlation clustering mode, cluster overlapping system, glaucoma, kernel density estimation, retinal therapeutic
Procedia PDF Downloads 250
2215 Structural Damage Detection in a Steel Column-Beam Joint Using Piezoelectric Sensors
Authors: Carlos H. Cuadra, Nobuhiro Shimoi
Abstract:
The application of piezoelectric sensors to detect structural damage due to seismic action on building structures is investigated. A plate-type piezoelectric sensor was developed and proposed for this task. A film-type piezoelectric sheet was attached to a steel plate and covered by a layer of glass. A special glue is used to fix the glass; this glue is a silicone that requires the application of ultraviolet rays for its hardening. The steel plate was then mounted at the steel column-beam joint of a test specimen, which was subjected to bending moments under monotonic and cyclic loading. The structural behavior of the test specimen during cyclic loading was verified using a finite element model, and good agreement was found between the two results in terms of load-displacement characteristics. The cross section of the steel elements (beam and column) is a box section of 100 mm×100 mm with a wall thickness of 6 mm. This steel section is specified by the Japanese Industrial Standards as carbon steel square tube for general structure (STKR400). The column and beam elements are jointed perpendicularly using fillet welding, so the resulting test specimen has a T shape. When large deformation occurs, the glass plate of the sensor device cracks, and at that instant the piezoelectric material emits a voltage signal that indicates a certain level of deformation or damage. The applicability of this piezoelectric sensor to detect structural damage was verified; however, additional analysis and experimental tests are required to establish standard parameters for the sensor system.
Keywords: piezoelectric sensor, static cyclic test, steel structure, seismic damages
Procedia PDF Downloads 121
2214 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases
Authors: Daniel C. Bonzo
Abstract:
Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think about extrapolation of data as extending information and conclusions from one estimand to another estimand. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is increasingly being accepted by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equally weighted and with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means of improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well.
Keywords: clustered data, estimand, extrapolation, mixed model
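The two cluster-weighting schemes mentioned above can be illustrated with a toy point estimator of an overall mean. This hypothetical helper is only a sketch of the weighting idea, not the paper's mixed-model estimator.

```python
def cluster_means(clusters, weights="equal"):
    """Point estimate of an overall mean from clustered data under two
    weighting schemes: equal cluster weights, or weights proportional to
    cluster size (the latter is equivalent to pooling all observations)."""
    cluster_avgs = [sum(c) / len(c) for c in clusters]
    if weights == "equal":
        # each cluster contributes equally, regardless of how many units it has
        return sum(cluster_avgs) / len(cluster_avgs)
    # size-proportional: every observation contributes equally
    total_n = sum(len(c) for c in clusters)
    return sum(sum(c) for c in clusters) / total_n
```

With very unequal cluster sizes the two estimators can differ substantially, which is exactly the behavior the simulation study probes.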
Procedia PDF Downloads 134
2213 Plot Scale Estimation of Crop Biophysical Parameters from High Resolution Satellite Imagery
Authors: Shreedevi Moharana, Subashisa Dutta
Abstract:
The present study focuses on the estimation of crop biophysical parameters, such as crop chlorophyll, nitrogen and water stress, at plot scale in crop fields. To achieve this, we have used high-resolution LISS IV satellite imagery. A new methodology is proposed in this research work: the spectral shape function of the paddy crop is employed to identify the significant wavelengths sensitive to paddy crop parameters. From the shape functions, regression index models were established relating the critical wavelength to the minimum and maximum wavelengths of the multi-spectral high-resolution LISS IV data. Moreover, these functional relationships were utilized to develop the index models. From these index models, crop biophysical parameters were estimated and mapped from LISS IV imagery at plot scale at the crop field level. The results showed that the nitrogen content of the paddy crop varied from 2-8%, chlorophyll from 1.5-9% and water content from 40-90%. It was observed that the variability in the rice agriculture system in India was purely a function of field topography.
Keywords: crop parameters, index model, LISS IV imagery, plot scale, shape function
Procedia PDF Downloads 166
2212 Effect of Shot Peening on the Mechanical Properties for Welded Joints of Aluminium Alloy 6061-T6
Authors: Muna Khethier Abbass, Khairia Salman Hussan, Huda Mohummed AbdudAlaziz
Abstract:
This work aims to study the effect of shot peening on the mechanical properties of welded joints produced by two different welding processes: tungsten inert gas (TIG) welding and friction stir welding (FSW) of aluminum alloy 6061-T6. The TIG welding process was carried out on sheets with dimensions of (100x50x6 mm) to obtain butt welded joints, using an ER4043 (AlSi5) electrode as filler metal and argon as shielding gas. The friction stir welding process was carried out on a CNC milling machine with a tool rotational speed of 1000 rpm and a welding speed of 20 mm/min to obtain the same butt welded joints. The welded pieces were examined by X-ray radiography to detect internal defects, and faulty welded pieces were excluded. Tensile test specimens were prepared from the welded joints and the base alloy with dimensions according to ASTM17500 and then subjected to a shot peening process using steel balls of 0.9 mm diameter for 15 min. All specimens were subjected to Vickers hardness testing and microstructure examination to study the effect of the welding process (TIG and FSW) on the microstructure of the weld zones. Results showed a general decay of mechanical properties of the TIG and FSW welded joints compared with the base alloy, while the FSW welded joint gave better mechanical properties than the TIG welded joint. This is due to the microstructural changes during the welding process. It was found that surface hardening by shot peening improved the mechanical properties of both welded joints; this is due to the compressive residual stress generated in the weld zones, which was measured using X-ray diffraction (XRD) inspection.
Keywords: friction stir welding, TIG welding, mechanical properties, shot peening
Procedia PDF Downloads 337
2211 Games behind Bars: A Longitudinal Study of Inmates Pro-Social Preferences
Authors: Mario A. Maggioni, Domenico Rossignoli, Simona Beretta, Sara Balestri
Abstract:
The paper presents the results of a longitudinal randomized control trial implemented in 2016 in two state prisons in California (USA). The subjects were randomly assigned to a 10-month program (GRIP, Guiding Rage Into Power) aimed at undoing the destructive behavioral patterns that lead to criminal actions by raising the individual's 'mindfulness'. This study tests whether participation in this program (treatment), based on strong relationships and mutual help, affects the pro-social behavior of participants, in particular with reference to trust and inequality aversion. The research protocol entails the administration of two questionnaires including a set of behavioral situations ('games') - widely used in the relevant literature in the field - to 80 inmates: 42 treated (enrolled in the program) and 38 controls. The first questionnaire was administered before treatment and randomization took place; the second questionnaire at the end of the program. The results of a difference-in-differences estimation procedure show that trust significantly increases among GRIP participants compared to the control group. The result is robust to alternative estimation techniques and to the inclusion of a set of covariates to further control for idiosyncratic characteristics of the prisoners.
Keywords: behavioral economics, difference in differences, longitudinal study, pro-social preferences
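In its simplest two-period form, the difference-in-differences point estimate underlying the analysis reduces to the change in the treated group minus the change in the control group. The sketch below is the textbook version, not the authors' full specification with covariates.

```python
def did_estimate(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Two-period difference-in-differences point estimate:
    (treated change) - (control change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))
```

The control group's change absorbs common time trends (here, anything affecting all inmates between the two questionnaires), so the remainder is attributed to the program under the parallel-trends assumption.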
Procedia PDF Downloads 393
2210 Evaluation of Expected Annual Loss Probabilities of RC Moment Resisting Frames
Authors: Saemee Jun, Dong-Hyeon Shin, Tae-Sang Ahn, Hyung-Joon Kim
Abstract:
Building loss estimation methodologies, which have advanced considerably in recent decades, are usually used to estimate the social and economic impacts resulting from seismic structural damage. In accordance with these methods, this paper presents the evaluation of the annual loss probability of a reinforced concrete moment resisting frame designed according to the Korean Building Code. The annual loss probability is defined by (1) a fragility curve obtained from a capacity spectrum method similar to that adopted in HAZUS, and (2) a seismic hazard curve derived from annual frequencies of exceedance per peak ground acceleration. Seismic fragilities are computed to calculate the annual loss probability of a structure using functions depending on structural capacity, seismic demand, structural response and the probability of exceeding damage state thresholds. This study carried out a nonlinear static analysis to obtain the capacity of an RC moment resisting frame selected as a prototype building. The analysis results show that the annual probability of extensive structural damage in the prototype building is expected to be 0.004%.
Keywords: expected annual loss, loss estimation, RC structure, fragility analysis
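Combining a fragility curve with a hazard curve into an annual damage probability can be sketched numerically as below. The grid values are illustrative, and the Poisson conversion at the end is a common assumption in loss estimation, not necessarily the paper's exact procedure.

```python
import math

def annual_exceedance_probability(pga_grid, annual_rate, fragility):
    """Annual probability of reaching a damage state: integrate the fragility
    curve against the (negative) slope of the hazard curve to get an annual
    rate of damaging events, then convert rate to probability assuming a
    Poisson occurrence model."""
    rate = 0.0
    for k in range(len(pga_grid) - 1):
        d_nu = annual_rate[k] - annual_rate[k + 1]        # event rate in this PGA bin
        frag_mid = 0.5 * (fragility[k] + fragility[k + 1])  # midpoint fragility
        rate += frag_mid * d_nu
    return 1.0 - math.exp(-rate)
```

With a fragility of 1 everywhere, the result collapses to the total annual rate converted to a probability, which is a useful sanity check.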
Procedia PDF Downloads 396
2209 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field
Authors: Jeronimo Cox, Tomonari Furukawa
Abstract:
Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail due to dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making such motions difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction of motion leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as the offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to the distortion sources. Soft iron distortion is more related to the scaling of the axes of the magnetometer sensors. Hard iron distortion is the larger contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include the production of spatial maps of the magnetic field and the collection of distortion signatures to better aid location tracking.
The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, as mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by collecting data while rotating at a static point, real-time estimation of the calibration parameters at each time step, and the use of two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that the distortion is constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to induce rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison for each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, it fails where the magnetic field direction in space is inconsistent.
Keywords: motion tracking, sensor fusion, magnetometer, state estimation
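The conventional static-point calibration used as a baseline above can be sketched as a least-squares sphere fit: under rotation in a uniform field, magnetometer readings trace a sphere whose center is the hard-iron offset. This is a standard formulation with illustrative data, not the authors' implementation.

```python
import numpy as np

def hard_iron_offset(readings):
    """Estimate the hard-iron offset as the center of the sphere traced by
    magnetometer readings collected while rotating at a static point.
    From ||m - c||^2 = R^2 we get the linear system
    2 m.c + (R^2 - c.c) = m.m, solved by least squares."""
    m = np.asarray(readings, dtype=float)
    A = np.hstack([2.0 * m, np.ones((len(m), 1))])  # unknowns: c and (R^2 - c.c)
    b = np.sum(m * m, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # estimated hard-iron offset
```

Subtracting the fitted offset recenters the data on the origin; the limitation discussed in the abstract is that this offset is only valid near where the calibration data were collected.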
Procedia PDF Downloads 83
2208 Runoff Estimation Using NRCS-CN Method
Authors: E. K. Naseela, B. M. Dodamani, Chaithra Chandran
Abstract:
GIS and remote sensing techniques facilitate accurate estimation of surface runoff from a watershed. In the present study an attempt has been made to evaluate the applicability of the Natural Resources Conservation Service Curve Number (NRCS-CN) method using GIS and remote sensing techniques in the upper Krishna basin (69,425 sq. km). Landsat 7 satellite data (30 m resolution) for the year 2012 has been used for the preparation of the land use land cover (LU/LC) map. The hydrologic soil groups were mapped on a GIS platform. The weighted curve numbers (CN) for all 5 subcatchments were calculated on the basis of LU/LC type and hydrologic soil class in the area, considering antecedent moisture conditions. Monthly rainfall data was available for 58 raingauge stations. An overlay technique was adopted for generating the weighted curve number. Results of the study show that land use changes determined from satellite images are useful in studying the runoff response of the basin. The results showed that there is no significant difference between observed and estimated runoff depths. For each subcatchment, statistically positive correlations were detected between observed and estimated runoff depth (0.6
2207 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes
Authors: J. J. Vargas, N. Prieto, L. A. Toro
Abstract:
Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to define the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. In depth, a Bayesian estimation was performed to calculate the posterior probability distribution of the parameters, namely the means and the variance-covariance matrix. This technique allows the data set to be analysed without relying on the hypothetical large sample implied in the problem, and can be treated as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measures for each DMU (decision making unit). A control limit was calculated with the obtained model; if a DMU exhibits low efficiency, the system efficiency is out of control. A global optimum was reached in the efficiency calculation, which ensures model reliability.
Keywords: data envelopment analysis, DEA, multivariate control chart, rejection simulation method
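A generic rejection sampler of the kind referred to above can be sketched as follows. The triangular target density in the usage example is purely illustrative; the paper applies the same mechanism to its Bayesian posterior.

```python
import random

def rejection_sample(target_pdf, proposal_draw, proposal_pdf, M, n, rng):
    """Draw n samples from target_pdf by rejection: propose x from the
    proposal, accept with probability target_pdf(x) / (M * proposal_pdf(x)),
    where M bounds the density ratio from above."""
    out = []
    while len(out) < n:
        x = proposal_draw(rng)
        # accept x if a uniform draw falls under the scaled target density
        if rng.random() * M * proposal_pdf(x) <= target_pdf(x):
            out.append(x)
    return out
```

For example, sampling the triangular density p(x) = 2x on [0, 1] with a uniform proposal and M = 2 yields draws whose mean approaches 2/3.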
Procedia PDF Downloads 372
2206 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level
Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni
Abstract:
In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of the instability in normal patellar tracking is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be a great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems for the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibril structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) up to the macro level of the soft-tissue hierarchy may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features to tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure.
The model was then implemented in a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R2) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The ‘best’ prediction of the yield strength and strain, as compared with the reported experimental data, was obtained when the cross-link density between the tropocollagen molecules of the fibril was equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation of the patellofemoral ligaments was located at the femoral insertions, while damage of the patellar tendon occurred in the middle of the structure. These predicted findings show a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful for a realistic simulation of a damage process or repair attempt compared with certain published studies.
Keywords: tropocollagen, multiscale model, fibrils, knee ligaments
Procedia PDF Downloads 127
2205 Deep Reinforcement Learning-Based Computation Offloading for 5G Vehicle-Aware Multi-Access Edge Computing Network
Authors: Ziying Wu, Danfeng Yan
Abstract:
Multi-Access Edge Computing (MEC) is one of the key technologies of the future 5G network. By deploying edge computing centers at the edge of the wireless access network, computation tasks can be offloaded to edge servers rather than the remote cloud server to meet the requirements of 5G low-latency and high-reliability application scenarios. Meanwhile, with the development of IOV (Internet of Vehicles) technology, various delay-sensitive and compute-intensive in-vehicle applications continue to appear. Compared with traditional internet business, these computation tasks have higher processing priority and lower delay requirements. In this paper, we design a 5G-based Vehicle-Aware Multi-Access Edge Computing Network (VAMECN) and propose a joint optimization problem of minimizing the total system cost. To address this problem, a deep reinforcement learning-based joint computation offloading and task migration optimization (JCOTM) algorithm is proposed, considering the influence of multiple factors such as concurrent computation tasks, the distribution of system computing resources, and network communication bandwidth. The mixed-integer nonlinear programming problem is formulated as a Markov Decision Process. Experiments show that our proposed algorithm can effectively reduce task processing delay and equipment energy consumption, optimize computation offloading and resource allocation schemes, and improve system resource utilization, compared with other computation offloading policies.
Keywords: multi-access edge computing, computation offloading, 5th generation, vehicle-aware, deep reinforcement learning, deep q-network
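The delay/energy trade-off that drives offloading decisions can be illustrated with a toy one-task cost model. This is a minimal sketch, not the paper's JCOTM algorithm: the cost weights, CPU speeds, transmit power, and link rate below are illustrative assumptions, and a greedy comparison stands in for the learned policy.

```python
# Toy weighted cost of delay and energy for one vehicular task.
# All parameter values are hypothetical, chosen only for illustration.
def local_cost(cycles, cpu_hz, power_w, w_delay=0.5, w_energy=0.5):
    delay = cycles / cpu_hz                # local execution time (s)
    energy = power_w * delay               # energy spent computing locally (J)
    return w_delay * delay + w_energy * energy

def edge_cost(bits, cycles, uplink_bps, edge_hz, tx_power_w,
              w_delay=0.5, w_energy=0.5):
    tx_delay = bits / uplink_bps           # time to upload the task input (s)
    exec_delay = cycles / edge_hz          # execution time on the edge server (s)
    energy = tx_power_w * tx_delay         # vehicle only pays transmission energy
    return w_delay * (tx_delay + exec_delay) + w_energy * energy

# A compute-intensive task: offloading wins because the edge CPU is much faster
task = dict(bits=2e6, cycles=1e9)
c_local = local_cost(task["cycles"], cpu_hz=1e9, power_w=4.0)
c_edge = edge_cost(task["bits"], task["cycles"], uplink_bps=20e6,
                   edge_hz=10e9, tx_power_w=1.0)
decision = "offload" if c_edge < c_local else "local"
print(decision)  # → offload
```

A DQN-based policy such as JCOTM would learn this decision jointly over many concurrent tasks and changing resource availability, rather than greedily per task.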
Procedia PDF Downloads 116
2204 An Approach to Apply Kernel Density Estimation Tool for Crash Prone Location Identification
Authors: Kazi Md. Shifun Newaz, S. Miaji, Shahnewaz Hazanat-E-Rabbi
Abstract:
In this study, the kernel density estimation (KDE) tool has been used to identify the most crash-prone locations on a national highway of Bangladesh. Like other developing countries, Bangladesh now faces road traffic crashes (RTC) as a serious social concern, and the situation is deteriorating day by day. The current black spot identification process is not based on modern technical tools and in most cases produces unreliable output. In this situation, characteristic analysis and black spot identification by spatial analysis would be an effective and low-cost approach to ensuring road safety. The methodology of this study incorporates a spatial-temporal framework to identify the locations where RTCs occur most. The method is applied to an economically important corridor, the Dhaka-Sylhet highway. This research proposes that the KDE method for identification of Hazardous Road Locations (HRL) could be used for all other national highways in Bangladesh and also in other developing countries. Some recommendations have been suggested for policy makers to reduce RTCs on the Dhaka-Sylhet highway, especially at black spots.
Keywords: hazardous road location (HRL), crash, GIS, kernel density
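As a rough illustration of the underlying technique (not the authors' GIS workflow), a one-dimensional Gaussian kernel density estimate over crash locations along a corridor can flag the densest stretch as a candidate black spot. The crash chainages and bandwidth below are made up for the sketch.

```python
import math

def kde_1d(points, x, bandwidth):
    """Gaussian kernel density estimate at location x (pure Python)."""
    n = len(points)
    coef = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return coef * sum(math.exp(-0.5 * ((x - p) / bandwidth) ** 2) for p in points)

# Hypothetical crash locations as kilometre posts along a highway corridor
crashes = [2.1, 2.3, 2.2, 2.4, 7.0, 12.5, 12.6, 12.4, 12.7, 12.5]

# Scan the corridor and flag the densest location as a candidate black spot
grid = [i * 0.1 for i in range(151)]
densities = [kde_1d(crashes, x, bandwidth=0.5) for x in grid]
hotspot = grid[densities.index(max(densities))]
print(round(hotspot, 1))  # densest cluster sits near km 12.5
```

GIS packages apply the same idea in two dimensions over the road network; the bandwidth choice controls how strongly nearby crashes are smoothed into one hotspot.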
Procedia PDF Downloads 312
2203 Relocation of Plastic Hinge of Interior Beam Column Connections with Intermediate Bars in Reinforced Concrete and T-Section Steel Inserts in Precast Concrete Frames
Authors: P. Wongmatar, C. Hansapinyo, C. Buachart
Abstract:
Failure of typical seismic frames has been found to involve plastic hinges occurring in beam sections near the column faces. Past research has shown that the seismic capacity of frames can be enhanced if the plastic hinges of the beams are shifted away from the column faces. This paper presents reinforcement detailing in interior beam-column connections aimed at relocating the plastic hinge in reinforced concrete and precast concrete frames. Four specimens were tested under quasi-static cyclic load, including two monolithic specimens and two precast specimens. For one monolithic specimen, typical seismic reinforcement was provided; it was considered the reference specimen, named M1. The other reinforced concrete frame, M2, contained additional intermediate steel in the connection area compared with specimen M1. For the precast specimens, embedded T-section steel inserts were provided in the joint, with and without diagonal bars in the connection area for specimens P1 and P2, respectively. The test results indicated ductile beam flexural failure in monolithic specimen M1, while the intermediate steel increased the strength and improved the joint performance of specimen M2. For the precast specimens, cracks formed at the ends of the steel inserts. However, slipping of the reinforcing steel lapped at the top of the beams was observed before yielding of the main bars, leading to brittle failure. The diagonal bars in precast specimen P2 improved the connection stiffness and the energy dissipation capacity.
Keywords: relocation, plastic hinge, intermediate bar, T-section steel, precast concrete frame
Procedia PDF Downloads 272
2202 Estimating View-Through Ad Attribution from User Surveys Using Convex Optimization
Authors: Yuhan Lin, Rohan Kekatpure, Cassidy Yeung
Abstract:
In digital marketing, robust quantification of view-through attribution (VTA) is necessary for evaluating channel effectiveness. VTA occurs when a product purchase is aided by an ad but without an explicit click (e.g., a TV ad). The lack of a tracking mechanism makes VTA estimation challenging. The most prevalent VTA estimation techniques rely on post-purchase in-product user surveys. User surveys enable the calculation of channel multipliers, which are the ratios of the view-attributed to the click-attributed purchases of each marketing channel. Channel multipliers thus provide a way to estimate the unknown VTA for a channel from its known click attribution. In this work, we use convex optimization to compute channel multipliers in a way that enables a mathematical encoding of the expected channel behavior. Large fluctuations in channel attributions often result from overfitting the calculations to user surveys. Casting channel attribution as a convex optimization problem allows the introduction of constraints that limit such fluctuations. The result of our study is a distribution of channel multipliers across the entire marketing funnel, with important implications for marketing spend optimization. Our technique can be broadly applied to estimate ad effectiveness in a privacy-centric world that increasingly limits user tracking.
Keywords: digital marketing, survey analysis, operational research, convex optimization, channel attribution
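The abstract does not spell out the authors' actual convex program, but the idea can be sketched as a constrained least-squares fit of per-channel multipliers, solved here by projected gradient descent with box constraints playing the role of the fluctuation-limiting constraints. The click counts, survey-implied VTA counts, bounds, and step size are all hypothetical.

```python
# Minimal sketch: fit per-channel multipliers m_c to survey-implied VTA
# counts, with box constraints that cap multiplier fluctuations.
# All data and bounds below are illustrative assumptions.
clicks = [120.0, 300.0, 80.0]        # click-attributed purchases per channel
survey_vta = [250.0, 280.0, 400.0]   # view-attributed purchases implied by surveys
lo, hi = 0.5, 3.0                    # business-motivated bounds on multipliers

def fit_multipliers(clicks, vta, lo, hi, lr=1e-6, steps=20000):
    m = [1.0] * len(clicks)  # start from "a view is worth one click"
    for _ in range(steps):
        for c in range(len(m)):
            # Gradient of (m_c * clicks_c - vta_c)^2 with respect to m_c
            grad = 2 * clicks[c] * (m[c] * clicks[c] - vta[c])
            m[c] -= lr * grad
            m[c] = min(hi, max(lo, m[c]))  # project back onto [lo, hi]
    return m

m = fit_multipliers(clicks, survey_vta, lo, hi)
print([round(x, 2) for x in m])  # → [2.08, 0.93, 3.0]
```

Note how the third channel's unconstrained optimum (400/80 = 5.0) is clipped to the upper bound: this is exactly the kind of survey-driven fluctuation the constraints are meant to suppress. A dedicated convex solver would handle richer constraints (e.g., smoothness across the funnel) in the same spirit.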
Procedia PDF Downloads 196
2201 The Response of the Central Bank to the Exchange Rate Movement: A Dynamic Stochastic General Equilibrium-Vector Autoregressive Approach for Tunisian Economy
Authors: Abdelli Soulaima, Belhadj Besma
Abstract:
The paper examines the central bank's response to movements of the nominal exchange rate and evaluates its effects on the volatility of output growth and inflation. A novel hybrid method, the dynamic stochastic general equilibrium vector autoregression (DSGE-VAR), is proposed for analyzing this policy experiment in a small-scale open economy, in particular Tunisia. We contribute to the empirical literature by applying this model, which is rarely used in this context, to Tunisian data. Note additionally that the degree of the central bank's response to the exchange rate in Tunisia is a particularly interesting issue. To improve the estimation, Bayesian techniques are applied to the sample 1980:Q1 to 2011:Q4. Our results reveal that the central bank should not react, or should react only softly, to the exchange rate. The variance decomposition shows that overall inflation volatility is more pronounced under the fixed exchange rate regime for most shocks, except for productivity and interest rate shocks. Output volatility is also higher under this regime for the majority of shocks, excepting the foreign interest rate and interest rate shocks.
Keywords: DSGE-VAR modeling, exchange rate, monetary policy, Bayesian estimation
Procedia PDF Downloads 293
2200 Predictive Analytics in Traffic Flow Management: Integrating Temporal Dynamics and Traffic Characteristics to Estimate Travel Time
Authors: Maria Ezziani, Rabie Zine, Amine Amar, Ilhame Kissani
Abstract:
This paper introduces a predictive model for urban transportation engineering, which is vital for efficient traffic management. Utilizing comprehensive datasets and advanced statistical techniques, the model accurately forecasts travel times by considering temporal variations and traffic dynamics. Machine learning algorithms, including regression trees and neural networks, are employed to capture sequential dependencies. Results indicate significant improvements in predictive accuracy, particularly during peak hours and holidays, with the incorporation of traffic flow and speed variables. Future enhancements may integrate weather conditions and traffic incidents. The model's applications range from adaptive traffic management systems to route optimization algorithms, facilitating congestion reduction and enhancing journey reliability. Overall, this research extends beyond travel time estimation, offering insights into broader transportation planning and policy-making realms, empowering stakeholders to optimize infrastructure utilization and improve network efficiency.
Keywords: predictive analytics, traffic flow, travel time estimation, urban transportation, machine learning, traffic management
Procedia PDF Downloads 82
2199 Dual-Channel Multi-Band Spectral Subtraction Algorithm Dedicated to a Bilateral Cochlear Implant
Authors: Fathi Kallel, Ahmed Ben Hamida, Christian Berger-Vachon
Abstract:
In this paper, a speech enhancement algorithm based on the Multi-Band Spectral Subtraction (MBSS) principle is evaluated for Bilateral Cochlear Implant (BCI) users. Specifically, a dual-channel noise power spectral estimation algorithm using the Power Spectral Densities (PSD) and Cross Power Spectral Densities (CPSD) of the observed signals is studied. The enhanced speech signal is obtained using the Dual-Channel Multi-Band Spectral Subtraction (DC-MBSS) algorithm. For performance evaluation, an objective speech assessment test relying on the Perceptual Evaluation of Speech Quality (PESQ) score is performed to fix the optimal number of frequency bands needed in the DC-MBSS algorithm. In order to evaluate speech intelligibility, subjective listening tests are conducted with 3 deafened BCI patients. Experimental results obtained using the French Lafon database corrupted by additive babble noise at different Signal-to-Noise Ratios (SNR) showed that the DC-MBSS algorithm improves speech understanding for single and multiple interfering noise sources.
Keywords: speech enhancement, spectral subtraction, noise estimation, cochlear implant
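The core spectral subtraction operation can be sketched in a few lines. This is the single-band, single-channel textbook variant, not the paper's dual-channel multi-band algorithm; the over-subtraction factor, spectral floor, and magnitude values are common illustrative choices, not values from the paper.

```python
# Single-band magnitude spectral subtraction on one analysis frame.
def spectral_subtract(noisy_mag, noise_mag, alpha=2.0, beta=0.02):
    """Subtract an estimated noise magnitude spectrum, with flooring."""
    enhanced = []
    for y, n in zip(noisy_mag, noise_mag):
        s = y - alpha * n          # over-subtract to suppress residual noise
        floor = beta * y           # spectral floor limits musical-noise holes
        enhanced.append(max(s, floor))
    return enhanced

noisy = [10.0, 4.0, 1.0, 6.0]      # |Y(f)| for four frequency bins (made up)
noise = [1.0, 1.5, 0.8, 0.5]       # estimated |N(f)| from a noise-only period
print(spectral_subtract(noisy, noise))  # → [8.0, 1.0, 0.02, 5.0]
```

A multi-band version applies a separate over-subtraction factor per frequency band, and the dual-channel variant replaces the fixed noise estimate with one derived from the PSD/CPSD of the two microphone signals.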
Procedia PDF Downloads 546
2198 Parameter Estimation for Contact Tracing in Graph-Based Models
Authors: Augustine Okolie, Johannes Müller, Mirjam Kretzchmar
Abstract:
We adopt a maximum-likelihood framework to estimate the parameters of a stochastic susceptible-infected-recovered (SIR) model with contact tracing on a rooted random tree. Given the number of detectees per index case, our estimator allows us to determine the degree distribution of the random tree as well as the tracing probability. Since we do not discover all infectees via contact tracing, this estimation is non-trivial. To keep things simple and stable, we develop an approximation suited to realistic situations (the contact tracing probability is small, or the probability for the detection of index cases is small). In this approximation, the only epidemiological parameter entering the estimator is the basic reproduction number R0. The estimator is tested in a simulation study and applied to COVID-19 contact tracing data from India. The simulation study underlines the efficiency of the method. For the empirical COVID-19 data, we are able to compare different degree distributions and perform a sensitivity analysis. We find that a power-law and a negative binomial degree distribution in particular fit the data well and that the tracing probability is rather large. The sensitivity analysis shows no strong dependency on the reproduction number.
Keywords: stochastic SIR model on graph, contact tracing, branching process, parameter inference
Procedia PDF Downloads 75
2197 ‘Groupitizing’ – A Key Factor in Math Learning Disabilities
Authors: Michal Wolk, Bat-Sheva Hadad, Orly Rubinsten
Abstract:
Objective: The visuospatial perception system process that allows us to decompose and recompose small quantities into a whole is often called “groupitizing.” Previous studies have found that adults use groupitizing processes in quantity estimation tasks and have linked this ability to recognize subgroups with arithmetic proficiency. This pilot study examined whether adults with math difficulties benefit from visuospatial grouping cues when asked to estimate the quantity of a given set. It also compared the tipping point at which a significant improvement occurs in adults with typical development versus adults with math difficulties. Method: In this pilot research, we recruited adults with low arithmetic abilities and matched controls. Participants were asked to estimate the quantity of a given set. Different grouping cues were displayed (space, color, or none) with different visual configurations (different quantities-different shapes, same quantities-different shapes, same quantities-same shapes). Results: Both groups showed significant performance improvement when grouping cues appeared. However, adults with low arithmetic abilities benefited from the grouping cues already at quantities as small as four. Conclusion: Impaired perceptual groupitizing abilities may be a characteristic of low arithmetic abilities.
Keywords: groupitizing, math learning disability, quantity estimation, visual perception system
Procedia PDF Downloads 202
2196 Scour Depth Prediction around Bridge Piers Using Neuro-Fuzzy and Neural Network Approaches
Authors: H. Bonakdari, I. Ebtehaj
Abstract:
The prediction of scour depth around bridge piers is frequently considered in river engineering. Scour depth estimation around bridge piers is one of the key aspects of efficient and optimal bridge structure design. In this study, scour depth around bridge piers is estimated using two methods, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN). The effective parameters in scour depth prediction are determined via dimensional analysis, and the scour depth is subsequently predicted using the ANN and ANFIS methods. In the current study, the performance of these methods is compared with the nonlinear regression (NLR) method. The results show that both methods presented in this study outperform existing methods. Moreover, using the ratio of pier length to flow depth, the ratio of the median particle diameter to flow depth, the ratio of pier width to flow depth, the Froude number, and the standard deviation of bed grain size as parameters leads to optimal performance in scour depth estimation.
Keywords: adaptive neuro-fuzzy inference system (ANFIS), artificial neural network (ANN), bridge pier, scour depth, nonlinear regression (NLR)
Procedia PDF Downloads 217
2195 Relationships Between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity
Authors: Anamika Sahu
Abstract:
The Himalayas, like many mountainous regions, are susceptible to multiple hazards. In recent times, the frequency of such disasters has been continuously increasing due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas have repeatedly been ruptured by great earthquakes in the past and have the potential for a future large seismic event, as the region lies within a seismic gap. Damage caused by earthquakes differs from one locality to another. It is well known that, during earthquakes, damage to structures is associated with the subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization will be valuable for designing and constructing built-up areas and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigation of the subsurface is required to describe its complexity. In mountainous regions, geophysical methods are gaining popularity, as areas can be studied without disturbing the ground surface, and these methods are also time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity for the top 30 m of soil. Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity up to 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC, 1997) classification. Parameters obtained through geotechnical investigation have been integrated with findings obtained through the subsurface geophysical survey. Joint interpretation has been used to establish inter-relationships among mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. It is found that results obtained through the MASW method fit well with the laboratory tests.
In both conditions, mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of rocks and the behavior of shear wave velocity.
Keywords: MASW, mechanical, petrophysical, site characterization
Procedia PDF Downloads 83
2194 Real-Time Classification of Hemodynamic Response by Functional Near-Infrared Spectroscopy Using an Adaptive Estimation of General Linear Model Coefficients
Authors: Sahar Jahani, Meryem Ayse Yucel, David Boas, Seyed Kamaledin Setarehdan
Abstract:
Near-infrared spectroscopy allows monitoring of oxy- and deoxy-hemoglobin concentration changes associated with the hemodynamic response function (HRF). The HRF is usually affected by natural physiological hemodynamics (systemic interference), which occurs in all body tissues, including brain tissue. This makes HRF extraction a very challenging task. In this study, we used a Kalman filter based on a general linear model (GLM) of brain activity to determine the proportion of systemic interference in the brain hemodynamics. The performance of the proposed algorithm is evaluated in terms of the peak-to-peak error (Ep), mean square error (MSE), and Pearson's correlation coefficient (R2) between the estimated and the simulated hemodynamic responses. This technique is also capable of real-time estimation of single-trial functional activations, and it was applied to classify finger tapping versus resting state. The average real-time classification accuracy of 74% over 11 subjects demonstrates the feasibility of developing an effective functional near-infrared spectroscopy brain-computer interface (fNIRS-BCI).
Keywords: hemodynamic response function, functional near-infrared spectroscopy, adaptive filter, Kalman filter
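The idea of adaptively estimating GLM coefficients with a Kalman filter can be sketched as follows. This is a minimal illustration in the spirit of the paper's approach, not its implementation: the state is a pair of GLM weights (a task-evoked amplitude and a drift/interference term), and the regressors, true weights, and noise settings are all made-up assumptions.

```python
import math

# Scalar-measurement Kalman filter over two GLM coefficients,
# with a random-walk state model (process noise q, measurement noise r).
def kalman_glm(xs, ys, q=1e-4, r=0.01):
    w = [0.0, 0.0]                          # GLM coefficients (state estimate)
    P = [[1.0, 0.0], [0.0, 1.0]]            # state covariance
    for x, y in zip(xs, ys):
        # Predict: random walk inflates the covariance diagonal by q
        P = [[P[0][0] + q, P[0][1]], [P[1][0], P[1][1] + q]]
        # Innovation variance for the scalar measurement y = x·w + v
        Px = [P[0][0] * x[0] + P[0][1] * x[1],
              P[1][0] * x[0] + P[1][1] * x[1]]
        s = x[0] * Px[0] + x[1] * Px[1] + r
        K = [Px[0] / s, Px[1] / s]          # Kalman gain
        err = y - (x[0] * w[0] + x[1] * w[1])
        w = [w[0] + K[0] * err, w[1] + K[1] * err]
        # Covariance update: P = (I - K x') P
        P = [[(1 - K[0] * x[0]) * P[0][0] - K[0] * x[1] * P[1][0],
              (1 - K[0] * x[0]) * P[0][1] - K[0] * x[1] * P[1][1]],
             [-K[1] * x[0] * P[0][0] + (1 - K[1] * x[1]) * P[1][0],
              -K[1] * x[0] * P[0][1] + (1 - K[1] * x[1]) * P[1][1]]]
    return w

# Synthetic session: y = 1.5 * task regressor + 0.3 * constant drift term
xs = [[math.sin(0.3 * t) ** 2, 1.0] for t in range(200)]
ys = [1.5 * x[0] + 0.3 * x[1] for x in xs]
w = kalman_glm(xs, ys)
print([round(c, 2) for c in w])  # ≈ [1.5, 0.3]
```

Because the coefficients are updated recursively at each sample, the task-evoked amplitude estimate is available in real time, which is what enables single-trial classification for BCI use.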
Procedia PDF Downloads 160