Search results for: model errors
17404 The Effectiveness and Accuracy of the Schulte Holt IOL Toric Calculator Processor in Comparison to Manually Input Data into the Barrett Toric IOL Calculator
Authors: Gabrielle Holt
Abstract:
This paper evaluates the efficacy of the Schulte Holt IOL Toric Calculator Processor (Schulte Holt ITCP). Data were manually input into the Barrett Toric Calculator, recording the number of minutes taken to complete the toric calculations, the number of errors identified during completion, and the distractions encountered. These results were then compared with the number of minutes the Schulte Holt ITCP took to complete the same calculations, also using the Barrett method, and with the number of errors identified in the Schulte Holt ITCP. The data demonstrate a clear advantage for the Schulte Holt ITCP, which notably reduces the time spent on toric calculations as well as the number of errors. With the ever-growing number of cataract surgeries taking place around the world and waitlists increasing, the Schulte Holt ITCP may well demonstrate a way forward to increase the availability of ophthalmologists and ophthalmic staff while maintaining patient safety.
Keywords: toric, toric lenses, ophthalmology, cataract surgery, toric calculations, Barrett
Procedia PDF Downloads 94
17403 Fault Diagnosis in Induction Motor
Authors: Kirti Gosavi, Anita Bhole
Abstract:
The paper demonstrates the simulation and steady-state performance of a three-phase squirrel cage induction motor and the detection of a broken rotor bar fault using MATLAB. The simulation model is successfully used to detect broken rotor bar faults in induction machines. A dynamic model using a PWM inverter and a mathematical model of the motor are developed. Dynamic simulation of a small-power induction motor is one of the key steps in validating the design process of the motor drive system, and it is needed to eliminate inadvertent design errors and the resulting errors in prototype construction and testing. The simulation model is helpful in detecting faults in a three-phase induction motor using motor current signature analysis.
Keywords: squirrel cage induction motor, pulse width modulation (PWM), fault diagnosis, induction motor
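Where the abstract leaves the signature analysis implicit, the sketch below illustrates the core of motor current signature analysis: a broken rotor bar produces sideband components at (1 ± 2s)f around the supply frequency f in the stator current spectrum. The sampling rate, slip, sideband amplitudes and detection threshold are assumed values applied to a synthetic current, not results from the paper's MATLAB model.

```python
import numpy as np

# Minimal MCSA sketch: a broken rotor bar shows up as sidebands at
# (1 +/- 2s)*f around the supply frequency f in the stator current.
fs = 10_000            # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
f, slip = 50.0, 0.03   # supply frequency and slip (assumed values)

# Synthetic stator current: fundamental plus small fault sidebands
current = (np.sin(2 * np.pi * f * t)
           + 0.02 * np.sin(2 * np.pi * (1 - 2 * slip) * f * t)
           + 0.02 * np.sin(2 * np.pi * (1 + 2 * slip) * f * t))

spectrum = np.abs(np.fft.rfft(current * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_amp(f0, half_width=0.5):
    # Peak spectral amplitude in a narrow band around f0
    mask = np.abs(freqs - f0) <= half_width
    return spectrum[mask].max()

# Flag a fault if sideband energy is large relative to the fundamental
ratio = max(band_amp((1 - 2 * slip) * f),
            band_amp((1 + 2 * slip) * f)) / band_amp(f)
print(f"sideband/fundamental ratio: {ratio:.3f}",
      "-> fault" if ratio > 0.01 else "-> healthy")
```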
Procedia PDF Downloads 633
17402 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
Deflection of the vertical is a quantity used to reduce geodetic measurements related to geoidal networks to the ellipsoidal plane, and it is essential in geoid modeling. Computing the deflection-of-the-vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection components provides improved results, but it is labor intensive without an appropriate method. The least squares method makes use of redundant observations to model a given set of problems obeying certain geometric conditions. This research work computes the deflection-of-the-vertical components for the Owerri West Local Government Area of Imo State using a geometric field technique. GPS observations in static mode provided the geodetic coordinates of points established within the study area, and precise leveling provided their orthometric heights. Using least squares in a MATLAB program, the estimated deflection components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed network vectors were also computed: 5.5911e-005 and 1.4965e-004 arc seconds for the north-south and east-west components, respectively. Including the derived deflection-of-the-vertical components in the ellipsoidal model will therefore yield high observational accuracy, since an ellipsoidal model alone is not tenable for high-quality work. It is thus important to include the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height
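As a minimal illustration of the least squares step described above, the sketch below solves a generic parametric adjustment l = Ax + v for two parameters and derives their standard errors from the a posteriori variance factor. The design matrix and observations are hypothetical numbers, not the Owerri West data.

```python
import numpy as np

# Generic parametric least squares sketch (illustrative numbers): estimate
# two deflection components (xi, eta) from redundant observation equations
# l = A x + v, then derive standard errors from the cofactor matrix.
A = np.array([[1.0, 0.2],      # design matrix (illustrative coefficients)
              [0.8, 1.1],
              [1.2, 0.9],
              [0.5, 1.4]])
l = np.array([0.031, 0.044, 0.052, 0.048])   # observed-minus-computed terms
P = np.eye(len(l))                           # weight matrix (equal weights)

N = A.T @ P @ A                              # normal matrix
x = np.linalg.solve(N, A.T @ P @ l)          # estimated parameters
v = l - A @ x                                # residuals
dof = len(l) - len(x)                        # redundancy
sigma0_sq = (v @ P @ v) / dof                # a posteriori variance factor
std_errors = np.sqrt(sigma0_sq * np.diag(np.linalg.inv(N)))

print("xi, eta:", x, "std errors:", std_errors)
```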
Procedia PDF Downloads 213
17401 Development of Visual Working Memory Precision: A Cross-Sectional Study of Simultaneously Delayed Responses Paradigm
Authors: Yao Fu, Xingli Zhang, Jiannong Shi
Abstract:
Visual working memory (VWM) capacity is the ability to maintain and manipulate short-term visual information that is no longer available to the senses. It is well known both for forming the basis of numerous cognitive abilities and for its limited capacity to hold information. VWM span, the most popular measurable indicator, is found to reach the adult level (3-4 items) around 12-13 years of age, while less is known about the development of VWM precision. Using a simultaneously delayed responses paradigm, the present study investigates the development of VWM precision among 6-18-year-old children and young adults, as well as its possible relationships with fluid intelligence and span. Results showed that precision and span both increased with age, and precision reached its maximum in the 16-17 age range. Moreover, when remembering 3 simultaneously presented items, the probability of remembering the target item correlated with fluid intelligence, and the probability of wrap errors (misbinding target and non-target items) correlated with age. When remembering more items, children performed worse than adults because of their wrap errors. Compared to span, VWM precision was an effective predictor of intelligence even after controlling for age. These results suggest that, unlike VWM span, precision develops in a slower yet longer fashion. Moreover, a decreasing probability of wrap errors might be the main driver of the development of precision. Last, precision correlated more closely with intelligence than span did in childhood and adolescence, which might be driven by the probability of remembering the target item.
Keywords: fluid intelligence, precision, visual working memory, wrap errors
Procedia PDF Downloads 277
17400 Spelling Errors in Persian Children with Developmental Dyslexia
Authors: Mohammad Haghighi, Amineh Akhondi, Leila Jahangard, Mohammad Ahmadpanah, Masoud Ansari
Abstract:
Background: According to recent estimates, approximately 4%-12% of Iranians have difficulty learning to read and spell, possibly as a result of developmental dyslexia. The study was planned to investigate spelling error patterns among Persian children with developmental dyslexia and to compare them with the errors exhibited by control groups. Participants: 90 students participated in this study: 30 fifth-grade students diagnosed as dyslexic by professionals, 30 normal fifth-grade readers, and 30 younger normal readers. There were 15 boys and 15 girls in each group. Qualitative and quantitative methods were used to analyze the errors. Results and conclusion: The results indicate similar spelling error profiles in the dyslexic and reading-level-matched groups, and these profiles differed from those of the age-matched group. However, the performances of the dyslexic group and the reading-level-matched group were different and inconsistent in some cases.
Keywords: spelling, error types, developmental dyslexia, Persian, writing system, learning disabilities, processing
Procedia PDF Downloads 429
17399 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for measuring both the 3D shape and the surface deformation of a component, and it has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, and distortion. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors, while the stereo-matching errors depend on the speckle quality and the matching algorithm and can only be controlled within a limited range. Distortion, moreover, is non-linear, particularly in a complex image acquisition system, so distortion correction should be considered carefully. The distortion function is difficult to formulate with conventional models in complex image acquisition systems where microscopes and other complex lenses are involved, and the errors of the distortion correction propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates onto an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a preliminary step to convert a distorted coordinate to its ideal position, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, B-spline, 3D, 2D
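To make the mapping idea concrete, the sketch below fits one 2D B-spline per coordinate from synthetic correspondences between distorted and ideal calibration positions, then uses the splines to undistort a measured point. The radial distortion generating the data and all numbers are illustrative assumptions, not the paper's imaging system.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Sketch of a B-spline distortion mapping: given calibration correspondences
# between distorted image coordinates and their ideal (pin-hole) positions,
# fit one 2D spline per coordinate and use it to undistort measured points.
rng = np.random.default_rng(0)
xd = rng.uniform(0, 1000, 400)              # distorted x (calibration points)
yd = rng.uniform(0, 1000, 400)              # distorted y

# Hypothetical radial distortion generating the "ideal" positions
r2 = ((xd - 500) ** 2 + (yd - 500) ** 2) / 500 ** 2
xi = 500 + (xd - 500) * (1 + 0.05 * r2)     # ideal x
yi = 500 + (yd - 500) * (1 + 0.05 * r2)     # ideal y

map_x = SmoothBivariateSpline(xd, yd, xi, kx=3, ky=3)
map_y = SmoothBivariateSpline(xd, yd, yi, kx=3, ky=3)

# Undistort a measured point; the corrected point conforms to the pin-hole model
px, py = 700.0, 300.0
print("corrected:", map_x.ev(px, py), map_y.ev(px, py))
```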
Procedia PDF Downloads 500
17398 The Use of Artificial Intelligence to Harmonization in the Lawmaking Process
Authors: Supriyadi, Andi Intan Purnamasari, Aminuddin Kasim, Sulbadana, Mohammad Reza
Abstract:
The development of the Industrial Revolution 4.0 era has significantly influenced the administration of countries in all parts of the world, including Indonesia, and not only in the administrative and economic sectors; the ways and methods of forming laws should also be adjusted. Until now, the process of making laws carried out by the Parliament with the Government still uses the classical method. The law-making process still relies on manual work, such as typing the harmonization of regulations, so errors such as writing mistakes, mis-copied articles and the like are not uncommon; these are tasks that require a high level of accuracy, yet the inventory and harmonization are carried out manually by humans. This method often creates problems due to errors and inaccuracies on the part of the officers who harmonize laws after discussion and approval, and this has a very serious impact on the system of law formation in Indonesia. The use of artificial intelligence in the law-making process appears justified and may be the answer to minimizing the disharmony of various laws and regulations. This is normative research using the legislative approach and the conceptual approach, and it focuses on the question of how to use artificial intelligence for harmonization in the lawmaking process.
Keywords: artificial intelligence, harmonization, laws, intelligence
Procedia PDF Downloads 163
17397 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process
Authors: Hong-Ming Chen
Abstract:
This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on time t. The Brownian motion and jump uncertainties are represented by a piecewise constant function w(t) and a point function θ(t), respectively. It is shown that the interest rate function and the yield function of the Hull and White model with jumps can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach generates yield functions with minimal fitting errors and small oscillation.
Keywords: optimization, interest rate model, jump process, deterministic
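For intuition about the dynamics being fitted (this is not the paper's cutting-plane solution of the semi-infinite program), the sketch below runs a plain Euler simulation of a Hull-White-type short rate with compound Poisson jumps; all parameters are assumed.

```python
import numpy as np

# Euler simulation of a jump-extended Hull-White-type short rate,
#   dr_t = a*(theta(t) - r_t)*dt + sigma*dW_t + J*dN_t,
# with assumed parameters, to illustrate jump-diffusion dynamics.
rng = np.random.default_rng(1)
a, sigma, r0 = 0.1, 0.01, 0.03
lam, jump_std = 0.5, 0.005          # jump intensity (per year), jump size std
theta = lambda t: 0.03 + 0.005 * t  # deterministic mean-reversion level
T, n = 5.0, 5 * 252
dt = T / n

r = np.empty(n + 1)
r[0] = r0
for i in range(n):
    dW = rng.normal(0, np.sqrt(dt))
    dN = rng.poisson(lam * dt)                 # number of jumps in dt
    J = rng.normal(0, jump_std, dN).sum()      # aggregated jump size
    r[i + 1] = r[i] + a * (theta(i * dt) - r[i]) * dt + sigma * dW + J

# The time-average of the path approximates (1/T) * integral of r dt,
# i.e. a rough continuously-compounded yield over [0, T]
print("approximate 5y yield:", r[:-1].mean())
```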
Procedia PDF Downloads 161
17396 An Analysis of L1 Effects on the Learning of EFL: A Case Study of Undergraduate EFL Learners at Universities in Pakistan
Authors: Nadir Ali Mugheri, Shaukat Ali Lohar
Abstract:
In a multilingual society like Pakistan, code switching is commonly observed in different contexts. People mostly use the L1 (native languages) and L2 for everyday communication, and the L3 (i.e., English, Urdu, Sindhi) in formal contexts and academic writing. Such frequent code switching affects EFL learners' acquisition of the grammar and lexis of the target language, which in the long run results in different types of errors in their writing. The current study investigates and identifies the common elements of the L1 and L2 (spoken by students of universities in Pakistan) that create hindrances for EFL learners. The case study method was used for this research. Formal writings of 400 EFL learners (participants from various universities of the country) were observed. Among the 400 participants, 200 were female and 200 were male EFL learners with different academic backgrounds. The errors found were categorized according to grammatical items, differences in meaning, sentence structure, and tense markers of the L1 or L2 in comparison with those of the target language. The findings showed that EFL learners in Pakistani universities have serious problems in their writing and commit serious errors related to the grammar and meanings of the target language. Analysis of the errors affirmed the hypothesis that the L1 or L2 does affect EFL learners. The research concludes by suggesting natural pedagogical approaches, such as task-based learning or communicative methods using contextualized material, to avoid the impediments of the L1 or L2 in acquiring the target language.
Keywords: multilingualism, L2 acquisition, code switching, language acquisition, communicative language teaching
Procedia PDF Downloads 291
17395 Measurement Errors and Misclassifications in Covariates in Logistic Regression: Bayesian Adjustment of Main and Interaction Effects and the Sample Size Implications
Authors: Shahadut Hossain
Abstract:
Measurement errors in continuous covariates and/or misclassifications in categorical covariates are common in epidemiological studies. Regression analysis ignoring such mismeasurements seriously biases the estimated main and interaction effects of covariates on the outcome of interest. Thus, adjustments for such mismeasurements are necessary. In this research, we propose a Bayesian parametric framework for eliminating the deleterious impacts of covariate mismeasurements in logistic regression. The proposed adjustment method is unified and can thus be applied to any generalized linear and non-linear regression model. Furthermore, adjustment for covariate mismeasurements requires validation data, usually in the form of either gold standard measurements or replicates of the mismeasured covariates on a subset of the study population. Initial investigation shows that the adequacy of such adjustment depends on the sizes of the main and validation samples, especially when the prevalences of the categorical covariates are low. Thus, we investigate the impact of the main and validation sample sizes on the adjusted estimates, and provide a general guideline about these sample sizes based on simulation studies.
Keywords: measurement errors, misclassification, mismeasurement, validation sample, Bayesian adjustment
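A small simulation illustrates why such adjustment matters (this shows the motivating bias only, not the paper's Bayesian machinery): a low-prevalence binary covariate is misclassified with assumed sensitivity and specificity, and the naive logistic estimate of its effect is attenuated.

```python
import numpy as np
import statsmodels.api as sm

# Simulate a binary exposure, misclassify it with imperfect sensitivity and
# specificity, and compare naive logistic regression estimates.
rng = np.random.default_rng(2)
n, beta0, beta1 = 20_000, -1.0, 0.7
x_true = rng.binomial(1, 0.1, n)                 # low-prevalence covariate
p = 1 / (1 + np.exp(-(beta0 + beta1 * x_true)))
y = rng.binomial(1, p)

sens, spec = 0.85, 0.95                          # assumed misclassification rates
x_obs = np.where(x_true == 1,
                 rng.binomial(1, sens, n),
                 rng.binomial(1, 1 - spec, n))

for name, x in [("true", x_true), ("misclassified", x_obs)]:
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    print(name, "beta1 estimate:", round(fit.params[1], 3))
# The estimate from the misclassified covariate is attenuated toward zero,
# which is what motivates validation data and Bayesian correction.
```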
Procedia PDF Downloads 409
17394 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan
Authors: Souad Romdhane, Lotfi Belkacem
Abstract:
In actuarial analyses of lifetime, only models accounting for observable risk factors have typically been developed. Within this context, the Cox proportional hazards (CPH) model is commonly used to assess the effects of observable covariates, such as gender, age, and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime distribution. This may be due to the existence of another random variable (frailty) that is still being ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function. Four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models, first through a real-life data set from the Quebec Pension Plan and then using a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors in both the fixed and random effect parts. Both the simulation and the real data set studies showed differences between the classical Cox model and the shared frailty model.
Keywords: life insurance-pension plan, survival analysis, risk factors, Cox proportional hazards model, multivariate failure-time data, shared frailty, simulations study
Procedia PDF Downloads 359
17393 Short Arc Technique for Baselines Determinations
Authors: Gamal F. Attia
Abstract:
The baselines are the distances and the lengths of the chords between the projections of the laser stations' positions on the reference ellipsoid. In satellite geodesy, it is very important to determine the optimal length of the orbital arc along which laser measurements are to be carried out. For dynamical methods, long arcs (one month or more) are to be used. Over such arcs, errors in modeling the different physical forces, such as the earth's gravitational field, air drag, solar radiation pressure, and others, may influence the accuracy of the estimated satellite positions; at the same time, the measurement errors can be almost completely excluded, and high stability in the determination of the relative coordinate system can be achieved. The influence of modeling errors can be diminished by using short arcs of the satellite orbit (several revolutions or days), but the station coordinates estimated from different arcs can then differ from each other by more than statistical zero. In the semidynamical 'short arc' method, one or several passes of the satellite within the zone of simultaneous visibility from both ends of the chord are used, and the estimated parameter in this case is the length of the chord. A comparison of the same baselines calculated with the long-arc and short-arc methods shows good agreement and even speaks in favor of the latter. In this paper, the short arc technique is explained, and 3 baselines are determined using the 'short arc' method.
Keywords: baselines, short arc, dynamical, gravitational field
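As a small illustration of what a baseline (chord) is, the sketch below converts two hypothetical station coordinates from geodetic form to ECEF on the WGS-84 ellipsoid and takes the Euclidean distance between them; it illustrates the quantity being estimated, not the short-arc estimator itself.

```python
import numpy as np

# Baseline (chord) between two stations from geodetic coordinates via
# ECEF conversion on the WGS-84 ellipsoid (station values are illustrative).
a, f = 6378137.0, 1 / 298.257223563       # WGS-84 semi-major axis, flattening
e2 = f * (2 - f)                          # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    N = a / np.sqrt(1 - e2 * np.sin(lat) ** 2)   # prime vertical radius
    return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                     (N + h) * np.cos(lat) * np.sin(lon),
                     (N * (1 - e2) + h) * np.sin(lat)])

p1 = geodetic_to_ecef(30.0, 31.0, 150.0)  # hypothetical station 1
p2 = geodetic_to_ecef(30.5, 31.5, 220.0)  # hypothetical station 2
print("chord length (m):", np.linalg.norm(p2 - p1))
```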
Procedia PDF Downloads 465
17392 Preparation Control Information and Analyzing of Metering Gas System Based of Orifice Plate
Authors: A. Harrouz, A. Benatiallah, O. Harrouz
Abstract:
This paper presents a search for errors in the measurement instruments of a dynamic liquid or gas metering system against the tolerances defined by international standards and recommendations. We implement a program in MATLAB/Simulink whose calculations are based on ISO 5167. The program takes system parameters into consideration, such as the plate arrangement, the orifice size, the given design conditions, and the reference conditions, and it finds the pressure drop for a given flow, or the flow for a given pressure loss. The results are considered very good and satisfactory, because the errors identified in the measuring instrument system are within the error margins set by the regulations.
Keywords: analyzing, control, gas, meters system
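The central ISO 5167 relation behind such a program is the orifice-plate mass-flow equation. The sketch below evaluates it in Python with an assumed discharge coefficient (ISO 5167 actually computes C from the Reader-Harris/Gallagher equation; 0.61 is only a typical magnitude) and flags a hypothetical meter reading against an assumed regulatory tolerance.

```python
import math

# ISO 5167 orifice-plate mass-flow relation:
#   qm = C / sqrt(1 - beta^4) * eps * (pi/4) * d^2 * sqrt(2 * dp * rho)
D, d = 0.100, 0.050        # pipe and orifice diameters, m (assumed)
beta = d / D               # diameter ratio
C = 0.61                   # assumed discharge coefficient
eps = 0.99                 # expansibility factor (near 1 for small dp/p)
dp = 25_000.0              # differential pressure, Pa
rho = 1.2                  # upstream gas density, kg/m3

qm = (C / math.sqrt(1 - beta**4) * eps
      * (math.pi / 4) * d**2 * math.sqrt(2 * dp * rho))
print(f"mass flow: {qm:.4f} kg/s")

# Error checking as in the paper: compare a meter reading against qm and
# flag it when the relative deviation exceeds the regulatory tolerance.
reading = 0.285            # hypothetical meter reading, kg/s
tolerance = 0.01           # e.g. 1 % (assumed limit)
print("within tolerance:", abs(reading - qm) / qm <= tolerance)
```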
Procedia PDF Downloads 399
17391 Leverage Effect for Volatility with Generalized Laplace Error
Authors: Farrukh Javed, Krzysztof Podgórski
Abstract:
We propose a new model that accounts for the asymmetric response of volatility to positive ('good news') and negative ('bad news') shocks in economic time series, the so-called leverage effect. In the past, asymmetric powers of errors in conditionally heteroskedastic models have been used to capture this effect. Our model uses the gamma difference representation of the generalized Laplace distributions, which efficiently models the asymmetry. It has one additional natural parameter, the shape, that is used instead of the power in the asymmetric power models to capture the strength of a long-lasting effect of shocks. Some fundamental properties of the model are provided, including the formula for the covariances and an explicit form of the conditional distribution of the 'bad' and 'good' news processes given the past, a property that is important for the statistical fitting of the model. Relevant features of volatility models are illustrated using S&P 500 historical data.
Keywords: heavy tails, volatility clustering, generalized asymmetric Laplace distribution, leverage effect, conditional heteroskedasticity, asymmetric power volatility, GARCH models
Procedia PDF Downloads 386
17390 Kinetic Study of Thermal Degradation of a Lignin Nanoparticle-Reinforced Phenolic Foam
Authors: Juan C. Domínguez, Belén Del Saz-Orozco, María V. Alonso, Mercedes Oliet, Francisco Rodríguez
Abstract:
In the present study, the kinetics of thermal degradation of a phenolic foam, a lignin-reinforced phenolic foam, and the lignin used as reinforcement were studied, and the activation energies of their degradation processes were obtained with a distributed activation energy model (DAEM). The average values over five heating rates of the mean activation energies obtained were: 99.1, 128.2, and 144.0 kJ.mol-1 for the phenolic foam; 109.5, 113.3, and 153.0 kJ.mol-1 for the lignin reinforcement; and 82.1, 106.9, and 124.4 kJ.mol-1 for the lignin-reinforced phenolic foam. The standard deviation ranges calculated for each sample were 1.27-8.85, 2.22-12.82, and 3.17-8.11 kJ.mol-1 for the phenolic foam, the lignin, and the reinforced foam, respectively. The DAEM model showed low mean square errors (< 1x10-5), proving that it is a suitable model for studying the kinetics of thermal degradation of the foams and the reinforcement.
Keywords: kinetics, lignin, phenolic foam, thermal degradation
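One common way to extract activation energies in a DAEM framework is the Miura-Maki integral method, in which ln(β/T²) plotted against 1/T at a fixed conversion level across several heating rates gives a straight line with slope -E/R. The sketch below applies it to hypothetical temperatures, not the paper's thermogravimetric data.

```python
import numpy as np

# Miura-Maki estimation of the activation energy at one conversion level:
#   ln(beta/T^2) = ln(k0*R/E) + 0.6075 - E/(R*T)
R = 8.314                                  # J/(mol K)
betas = np.array([5, 10, 15, 20, 25.0])    # heating rates, K/min (assumed)
# Hypothetical temperatures (K) at 50 % conversion for each heating rate
T = np.array([600.0, 612.0, 620.0, 626.0, 631.0])

slope, intercept = np.polyfit(1 / T, np.log(betas / T**2), 1)
E = -slope * R                             # activation energy at this conversion
k0 = E / R * np.exp(intercept - 0.6075)    # frequency factor from the intercept
print(f"E = {E / 1000:.1f} kJ/mol, k0 = {k0:.3e} 1/min")
# Repeating this at each conversion level traces out the distribution of
# activation energies that the DAEM assumes.
```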
Procedia PDF Downloads 488
17389 A Comparative Study of Cognitive Factors Affecting Social Distancing among Vaccinated and Unvaccinated Filipinos
Authors: Emmanuel Carlo Belara, Albert John Dela Merced, Mark Anthony Dominguez, Diomari Erasga, Jerome Ferrer, Bernard Ombrog
Abstract:
Social distancing errors are commonly prevalent among both vaccinated and unvaccinated people in the Filipino community. This study aims to identify the cognitive factors behind these errors and to relate how they affect daily life. The factors observed include memory, attention, anxiety, decision-making, and stress. Upon applying ergonomic tools and statistical treatments such as the t-test and multiple linear regression, stress and attention turned out to have the greatest impact on social distancing errors.
Keywords: vaccinated, unvaccinated, social distancing, Filipinos
Procedia PDF Downloads 203
17388 Market Illiquidity and Pricing Errors in the Term Structure of CDS
Authors: Lidia Sanchis-Marco, Antonio Rubia, Pedro Serrano
Abstract:
This paper studies the informational content of pricing errors in the term structure of sovereign CDS spreads. The residuals from a no-arbitrage model are employed to construct a price discrepancy estimate, or noise measure. The noise estimate is understood as an indicator of market distress and reflects frictions such as illiquidity. Empirically, the noise measure is computed for an extensive panel of CDS spreads. Our results reveal that an important fraction of systematic risk is not priced in default swap contracts. When projecting the noise measure onto a set of financial variables, the panel-data estimates show that greater price discrepancies are systematically related to a higher level of offsetting transactions in CDS contracts. This evidence suggests that arbitrage capital flows exit the marketplace during times of distress, which is consistent with market segmentation among investors and arbitrageurs, where professional arbitrageurs are particularly ineffective at bringing prices to their fundamental values during turbulent periods. Our empirical findings are robust across the most common CDS pricing models employed in the industry.
Keywords: credit default swaps, noise measure, illiquidity, capital arbitrage
Procedia PDF Downloads 569
17387 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics
Authors: Michael Lousis
Abstract:
This research study is concerned with learners' errors in arithmetic and algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher's discretion but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners' errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners' course of thinking that led them to specific observable actions resulting in particular errors on specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations. Thus the belief systems of the behaviourist, Piagetian-constructivist, and philosophy-of-science perspectives were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as the general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, it explains why the adoption of the information-processing paradigm in conjunction with the philosophy of mind gives a sound and legitimate basis for the development of future studies concerning mathematical error analysis.
Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors
Procedia PDF Downloads 191
17386 Model for Introducing Products to New Customers through Decision Tree Using Algorithm C4.5 (J-48)
Authors: Komol Phaisarn, Anuphan Suttimarn, Vitchanan Keawtong, Kittisak Thongyoun, Chaiyos Jamsawang
Abstract:
This article analyzes insurance information containing data on customers' decisions when purchasing a life insurance pay package. The data were analyzed in order to present new customers with the Life Insurance Perfect Pay package that meets their needs as closely as possible. The basic data on the insurance pay package were collected for data mining, thus reducing the scattering of information. The data were then classified to obtain a decision model, or decision tree, using algorithm C4.5 (J-48). For the classification, WEKA tools were used to build the model, and testing datasets were used to test the decision tree for decision accuracy. Validation of the model's classification showed an accurate prediction rate of 68.43%, while 31.25% were errors. The same data set was then tested with other models, i.e., Naive Bayes and ZeroR. The results showed that the J-48 method predicted more accurately. The researcher therefore applied the decision tree in writing the program used to introduce the product to new customers, to support customers' decision making when purchasing the insurance package that meets their needs as closely as possible.
Keywords: decision tree, data mining, customers, life insurance pay package
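WEKA's J-48 is a Java implementation of C4.5. Scikit-learn has no C4.5, so the sketch below uses its CART-based DecisionTreeClassifier as an analogous stand-in on synthetic customer features; all feature names and the decision rule generating the labels are hypothetical, not the paper's insurance data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic "customer" features (all hypothetical): age, income band,
# and whether the customer has dependents.
rng = np.random.default_rng(3)
n = 1000
X = np.column_stack([rng.integers(18, 70, n),      # age
                     rng.integers(1, 6, n),        # income band
                     rng.integers(0, 2, n)])       # has dependents
# Synthetic purchase decision loosely tied to the features
y = (((X[:, 0] > 30) & (X[:, 1] >= 3))
     | ((X[:, 2] == 1) & (rng.random(n) > 0.4))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(X_tr, y_tr)
print("accuracy:", round(tree.score(X_te, y_te), 4))
print(export_text(tree, feature_names=["age", "income_band", "dependents"]))
```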
Procedia PDF Downloads 428
17385 Numerical Analysis of the Turbulent Flow around DTMB 4119 Marine Propeller
Authors: K. Boumediene, S. E. Belhenniche
Abstract:
This article presents a numerical analysis of the turbulent flow past the DTMB 4119 marine propeller by means of a RANS approach; the propeller was designed at the David Taylor Model Basin in the USA. The purpose of this study is to predict the hydrodynamic performance of the marine propeller and to compare the results with experiments carried out in open water tests. A periodic computational domain was created to reduce the size of the unstructured mesh generated. The standard k-ω turbulence model was selected for the simulation, and the results were in good agreement with the experiments: the errors were estimated at 1.3% and 5.9% for KT and KQ, respectively.
Keywords: propeller flow, CFD simulation, RANS, hydrodynamic performance
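The reported 1.3% and 5.9% errors refer to the standard open-water coefficients. The sketch below shows how KT, KQ and the open-water efficiency are computed from thrust and torque; the thrust, torque and operating point values are placeholders, not the paper's CFD results.

```python
import math

# Standard open-water propeller coefficients: thrust coefficient KT,
# torque coefficient KQ and open-water efficiency eta0.
rho = 998.0      # water density, kg/m^3
n = 10.0         # rotational speed, rev/s (placeholder)
D = 0.305        # propeller diameter, m (approx. DTMB 4119 model scale)
Va = 2.5         # advance speed, m/s (placeholder)
T = 250.0        # computed thrust, N (placeholder)
Q = 12.0         # computed torque, N*m (placeholder)

J = Va / (n * D)                      # advance coefficient
KT = T / (rho * n**2 * D**4)          # thrust coefficient
KQ = Q / (rho * n**2 * D**5)          # torque coefficient
eta0 = J * KT / (2 * math.pi * KQ)    # open-water efficiency

print(f"J={J:.3f}  KT={KT:.4f}  10KQ={10 * KQ:.4f}  eta0={eta0:.3f}")
# Relative error vs experiment: abs(KT_cfd - KT_exp) / KT_exp, and likewise
# for KQ, which is how the 1.3 % and 5.9 % figures are defined.
```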
Procedia PDF Downloads 501
17384 Creation and Implementation of A New Palliative Care Drug Chart, via A Closed-Loop Audit
Authors: Asfa Hussain, Chee Tang, Mien Nguyen
Abstract:
Introduction: The safe use of medications depends on clear, well-documented prescribing, and medical drug charts should be regularly checked to ensure that they are fit for purpose. Aims: The purpose of this study was to evaluate whether the Isabel Hospice drug charts were effective or prone to medication errors. The aim was to create a comprehensive palliative care drug chart in line with medico-legal guidelines and to minimise drug administration and prescription errors. Methodology: 50 medical drug charts were audited from March to April 2020 in a hospice in the East of England to assess whether they complied with medico-legal guidelines. Meetings were held with the larger multi-disciplinary team (MDT), including the pharmacists, nursing staff and doctors, to raise awareness of the issue. A preliminary drug chart was created using input from the wider MDT. The chart was revised and trialled over 15 times, and each time feedback from the MDT was incorporated into the subsequent template. In the midst of the COVID-19 pandemic, in September 2020, the finalised drug chart was introduced, and 50 new palliative drug charts were re-audited to evaluate the changes made. Results: Prescribing and administration errors were high prior to the implementation of the new chart. These improved significantly after the new drug charts were introduced, thereby improving patient safety and care. The percentage of inadequately documented allergies went down from 66% to 20%, and incorrect oxygen prescriptions fell from 40% to 16%. Prescribed drug-drug interactions decreased by 30%. Conclusion: It is vital to have clear, standardised drug charts in line with medico-legal standards to allow ease of prescription and administration of medications and to ensure optimal patient-centred care. This closed-loop audit demonstrated significant improvement in documentation and the prevention of potentially fatal drug errors and interactions.
Keywords: palliative care, drug chart, medication errors, drug-drug interactions, COVID-19, patient safety
Procedia PDF Downloads 176
17383 Estimating Anthropometric Dimensions for Saudi Males Using Artificial Neural Networks
Authors: Waleed Basuliman
Abstract:
Anthropometric dimensions are considered among the important factors when designing human-machine systems. In this study, the estimation of anthropometric dimensions is improved by using an Artificial Neural Network (ANN) model that predicts the anthropometric measurements of Saudi males in Riyadh City. A total of 1427 Saudi males aged 6 to 60 years participated in the measurement of 20 anthropometric dimensions. These anthropometric measurements are considered important for designing work and life applications in Saudi Arabia. The data were collected over eight months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining 15 dimensions were set as the predicted variables (model outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to estimate the body dimensions of the Saudi male population in Riyadh City. The network's mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. These errors are smaller, and thus better, than the errors reported in the literature. Finally, the accuracy of the developed neural network was evaluated by comparing the predicted outcomes with a regression model; the ANN model showed a higher coefficient of determination (R2) between the predicted and actual dimensions than the regression model.
Keywords: artificial neural network, anthropometric measurements, back-propagation
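A rough scikit-learn analogue of such a network, one hidden layer of 25 units mapping a few easy-to-measure inputs to 15 output dimensions, is sketched below on synthetic data; the paper's exact training setup, data and input convention are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Single-hidden-layer network (25 units) predicting 15 body dimensions
# from 6 inputs, evaluated with MAPE and RMSE as in the abstract.
rng = np.random.default_rng(4)
n = 1400
X = rng.normal(size=(n, 6))                  # 6 input measurements (synthetic)
W = rng.normal(size=(6, 15))
Y = 50 + X @ W + 0.1 * rng.normal(size=(n, 15))   # 15 correlated dimensions

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
net.fit(X_tr, Y_tr)

pred = net.predict(X_te)
rmse = np.sqrt(np.mean((pred - Y_te) ** 2))
mape = np.mean(np.abs((pred - Y_te) / Y_te))
print(f"RMSE={rmse:.3f}  MAPE={mape:.4f}")
```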
Procedia PDF Downloads 488
17382 The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
Authors: Y. Sunecher, N. Mamode Khan, V. Jowaheer
Abstract:
This paper considers the modelling of a non-stationary bivariate integer-valued autoregressive moving average process of order one (BINARMA(1,1)) with correlated Poisson innovations. The BINARMA(1,1) model is specified using the binomial thinning operator and by assuming that the cross-correlation between the two series is induced by the innovation terms only. Based on these assumptions, the non-stationary marginal and joint moments of the BINARMA(1,1) are derived iteratively from some initial stationary moments. As regards the estimation of the parameters of the proposed model, the conditional maximum likelihood (CML) estimation method is derived based on thinning and convolution properties. The forecasting equations of the BINARMA(1,1) model are also derived. A simulation study is also proposed, where BINARMA(1,1) count data are generated using multivariate Poisson R code for the innovation terms. The performance of the BINARMA(1,1) model is then assessed through a simulation experiment, and the mean estimates of the model parameters obtained are all efficient, based on their standard errors. The proposed model is then used to analyse real-life accident data from the motorway in Mauritius, based on some covariates: policemen, daily patrol, speed cameras, traffic lights and roundabouts. The BINARMA(1,1) model is applied to the accident data, and the CML estimates clearly indicate a significant impact of the covariates on the number of accidents on the motorway in Mauritius. The forecasting equations also provide reliable one-step-ahead forecasts.
Keywords: non-stationary, BINARMA(1,1) model, Poisson innovations, conditional maximum likelihood, CML
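To show the two building blocks named above, binomial thinning and innovation-induced cross-correlation, the sketch below simulates a simplified stationary BINARMA(1,1)-type pair with a common-shock bivariate Poisson innovation. All parameters are assumed, and the paper's non-stationary specification is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def thin(alpha, x):
    """Binomial thinning alpha o x: each of the x counts survives w.p. alpha."""
    return rng.binomial(x, alpha)

T = 500
a1, a2 = 0.4, 0.3                  # AR thinning parameters
b1, b2 = 0.2, 0.5                  # MA thinning parameters
lam0, lam1, lam2 = 1.0, 2.0, 1.5   # common and series-specific intensities

X = np.zeros((T, 2), dtype=int)
R_prev = np.zeros(2, dtype=int)
for t in range(1, T):
    z0 = rng.poisson(lam0)         # common shock -> cross-correlation
    R = np.array([z0 + rng.poisson(lam1), z0 + rng.poisson(lam2)])
    X[t, 0] = thin(a1, X[t - 1, 0]) + R[0] + thin(b1, R_prev[0])
    X[t, 1] = thin(a2, X[t - 1, 1]) + R[1] + thin(b2, R_prev[1])
    R_prev = R

print("sample cross-correlation:", np.corrcoef(X[1:, 0], X[1:, 1])[0, 1])
```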
Procedia PDF Downloads 129
17381 A Spatial Approach to Model Mortality Rates
Authors: Yin-Yee Leong, Jack C. Yue, Hsin-Chung Wang
Abstract:
Human longevity has been experiencing its largest increase since the end of World War II, and modeling mortality rates is therefore often the focus of many studies. Among all mortality models, the Lee–Carter model is the most popular approach, since it is fairly easy to use and has good accuracy in predicting mortality rates (e.g., for Japan and the USA). However, empirical studies from several countries have shown that the age parameters of the Lee–Carter model are not constant in time. Many modifications of the Lee–Carter model have been proposed to deal with this problem, including adding an extra cohort effect and adding another period effect. In this study, we propose a spatial modification and use clusters to explain why the age parameters of the Lee–Carter model are not constant. In spatial analysis, clusters are areas with unusually higher or lower mortality rates than their neighbors, where the "location" of mortality rates is measured by age and time, that is, a 2-dimensional coordinate. We use a popular cluster detection method, spatial scan statistics, a local statistical test based on the likelihood ratio test, to evaluate where there are locations with mortality rates that cannot be described well by the Lee–Carter model. We first use computer simulation to demonstrate that the cluster effect is a possible source of the problem of non-constant age parameters. Next, we show that adding the cluster effect can solve the non-constant problem. We also apply the proposed approach to mortality data from Japan, France, the USA, and Taiwan. The empirical results show that our approach has better-fitting results and smaller mean absolute percentage errors than the Lee–Carter model.
Keywords: mortality improvement, Lee–Carter model, spatial statistics, cluster detection
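For reference, the classical Lee–Carter fit that the spatial modification builds on can be obtained from an SVD of the centered log mortality matrix, as in the sketch below; the matrix is synthetic stand-in data, not the paper's mortality tables.

```python
import numpy as np

# Classical Lee-Carter fit via SVD:
#   log m(x,t) = a_x + b_x * k_t + e(x,t)
rng = np.random.default_rng(6)
ages, years = 20, 40
true_b = np.linspace(0.02, 0.08, ages)
true_k = np.linspace(5, -5, years)
log_m = ((-6 + 0.08 * np.arange(ages))[:, None]
         + np.outer(true_b, true_k)
         + 0.01 * rng.normal(size=(ages, years)))

a_x = log_m.mean(axis=1)                       # age pattern
Z = log_m - a_x[:, None]                       # centered matrix
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                  # normalized so sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()             # period index, sums to ~0

fitted = a_x[:, None] + np.outer(b_x, k_t)
mape = np.mean(np.abs((fitted - log_m) / log_m))
print(f"first-component variance share: {s[0]**2 / (s**2).sum():.3f},"
      f" MAPE: {mape:.4f}")
```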
Procedia PDF Downloads 172
17380 Interlingual Interference in Students’ Writing
Authors: Zakaria Khatraoui
Abstract:
Interlanguage has come to occupy a central role across a considerable research landscape. Whether academically driven or pedagogically oriented, interlanguage has become more important than ever before. It probes theoretical and linguistic issues in the field and moves from idea to practice, vindicating a philosophy that bridges theory and educational practice. Characteristically, the present research offers a well-developed theoretical framework sustained by empirical teaching practices, while teasing apart its narrowly confined implementation. The focus of this interlingual study is placed squarely on syntactic errors in students' writing as performance. To attain this endeavor, the paper adopts a range of qualitative methodological choices supported by a solid design. The central finding to be examined is the creative nature of syntactic errors, evidenced by the tangible dominance of cognitively intralingual errors over linguistically interlingual ones. Subsequently, this paper attempts to highlight transferable implications of both theoretical and professional pedagogical value. In particular, the results are relevant to the scholarly community in a multidimensional sense and recommend actions of educational value.
Keywords: interlanguage, interference, error, writing
Procedia PDF Downloads 75
17379 Improving Health Care and Patient Safety at the ICU by Using Innovative Medical Devices and ICT Tools: Examples from Bangladesh
Authors: Mannan Mridha, Mohammad S. Islam
Abstract:
Innovative medical technologies offer more effective medical care, with less risk to patients and healthcare personnel. Medical technology and devices, when properly used, provide better data, precise monitoring and less invasive treatment, and can be more targeted and often less costly. The Intensive Care Unit (ICU), equipped with patient monitoring, respiratory and cardiac support, pain management, emergency resuscitation and life support devices, is particularly prone to medical errors, for various reasons. Many people in developing countries now wonder whether a visit to the hospital might harm rather than help them. This is because clinicians in developing countries are required to maintain an increasing workload with limited resources and in the absence of a well-functioning safety system. A team of experts in medicine, biomedical engineering and clinical engineering from Sweden and Bangladesh has worked together to study the incidents and adverse events at ICUs in Bangladesh. The study included both public and private hospitals, to provide a better understanding of the physical structure, organization and practice in the operating processes of care and of the occurrence of adverse outcomes (the errors, risks and accidents related to medical devices at the ICU), and to develop an ICT-based support system to reduce hazards and errors and thus improve the quality of performance, care and cost effectiveness at the ICU. Concrete recommendations and guidelines have been made for preparing appropriate ICT-based tools and methods to improve the routines for using medical devices and for reporting and analyzing incidents at the ICU, in order to reduce the number of undetected and unsolved incidents and thus improve patient safety.
Keywords: intensive care units, medical errors, medical devices, patient care and safety
Procedia PDF Downloads 149
17378 Measuring the Effectiveness of Response Inhibition regarding to Motor Complexity: Evidence from the Stroop Effect
Authors: Germán Gálvez-García, Marta Lavin, Javiera Peña, Javier Albayay, Claudio Bascour, Jesus Fernandez-Gomez, Alicia Pérez-Gálvez
Abstract:
We studied the effectiveness of response inhibition in movements with different degrees of motor complexity when they were executed in isolation and alternately. Sixteen participants performed the Stroop task, which was used as a measure of response inhibition. Participants responded by lifting the index finger and by reaching the screen with the same finger. Both actions were performed separately and alternately in different experimental blocks. Repeated measures ANOVAs were used to compare reaction time, movement time, kinematic errors and movement errors across conditions (experimental block, movement, and congruency). Delta plots were constructed for distributional analyses of response inhibition and accuracy rate. The effectiveness of response inhibition showed no difference when the movements were performed in separate blocks. Nevertheless, it showed differences when they were performed alternately in the same experimental block, being more effective for the lifting action. This could be due to competition for the available resources in a more complex scenario, which also demands adopting some strategy to avoid errors.
Keywords: response inhibition, motor complexity, Stroop task, delta plots
Procedia PDF Downloads 394
17377 Efficiency of Google Translate and Bing Translator in Translating Persian-to-English Texts
Authors: Samad Sajjadi
Abstract:
Machine translation is increasingly being used by academic writers, especially students and researchers whose native language is not English. Numerous studies have been conducted on machine translation, but few investigations have assessed the accuracy of machine translation from Persian to English at the lexical, semantic, and syntactic levels. Using Groves and Mundt's (2015) model of error taxonomy, the current study evaluated Persian-to-English translations produced by two well-known online translators, Google Translate and Bing Translator. A total of 240 texts were randomly selected from different academic fields (law, literature, medicine, and mass media), with 60 texts for each domain. All texts were rendered by the two translation systems and then by four human translators. All statistical analyses were performed using SPSS. The results indicated that Google translations were more accurate than those produced by Bing Translator, especially in the domains of medicine (lexis: 186 vs. 225; semantic: 44 vs. 48; syntactic: 148 vs. 264 errors) and mass media (lexis: 118 vs. 149; semantic: 25 vs. 32; syntactic: 110 vs. 220 errors), respectively. Nonetheless, both machines are reasonably accurate in Persian-to-English translation of lexis and syntactic structures, particularly for mass media and medical texts.
Keywords: machine translations, accuracy, human translation, efficiency
Procedia PDF Downloads 78
17376 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. The calibration and validation sets were designed for building the method and evaluating its adequacy over a moisture content range of 10 to 15 percent (wet basis). Prediction models based on partial least squares (PLS) regression were developed in the near-infrared. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction). The spectra of the pasta samples were treated with the different mathematical pre-treatments before being used to build models relating the spectral information to moisture content. The moisture content in pasta predicted by the FT-NIR method correlated very well with the values determined via traditional methods (R2 = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for the rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775); the MMN pre-processing method was found the most suitable, and a maximum coefficient of determination (R2) of 0.9875 was obtained for the calibration model developed.
Keywords: FT-NIR, pasta, moisture determination, food engineering
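A compact scikit-learn analogue of this chemometric pipeline, min-max normalization per spectrum followed by PLS regression and a cross-validated error, is sketched below on synthetic spectra; it is not the paper's FT-NIR data or software.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic spectra: one fixed band plus one "water-related" band whose
# relative height grows with moisture, so the signal survives normalization.
rng = np.random.default_rng(7)
n, channels = 150, 500
moisture = rng.uniform(10, 15, n)                    # % wet basis
band1 = np.exp(-((np.arange(channels) - 150) / 40.0) ** 2)
band2 = np.exp(-((np.arange(channels) - 350) / 40.0) ** 2)
spectra = (band1 + (moisture[:, None] / 15.0) * band2
           + 0.01 * rng.normal(size=(n, channels)))

# Min-max normalization (MMN) per spectrum, as in the best reported model
mn = spectra.min(axis=1, keepdims=True)
mx = spectra.max(axis=1, keepdims=True)
X = (spectra - mn) / (mx - mn)

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, moisture, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - moisture) ** 2))
r2 = np.corrcoef(pred, moisture)[0, 1] ** 2
print(f"RMSECV={rmsecv:.3f} %, R2={r2:.4f}")
```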
Procedia PDF Downloads 258
17375 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies, stemming from diverse research hypotheses, have been proposed to safeguard DNNs against such attacks. Building upon prior work, our approach involves the use of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and to reconstruct inputs from these representations, typically by minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibits high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation: we considered various image sizes, constructing the models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we propose a method that replaces image-specific dimensions with a structure independent of both the dimensions and the neural network model, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
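The detection principle reduces to a threshold on reconstruction error. The sketch below demonstrates it framework-agnostically with a small MLP autoencoder trained on benign data (synthetic, low-rank Gaussian vectors standing in for image features), not the paper's multi-modal spectral architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Train an autoencoder on benign inputs only, then flag inputs whose
# reconstruction MSE exceeds a threshold set from the benign distribution.
rng = np.random.default_rng(8)
d = 64                                         # flattened feature dim (toy)
latent = rng.normal(size=(2000, 8))            # low-rank "benign manifold"
mix = rng.normal(size=(8, d))
benign = latent @ mix + 0.05 * rng.normal(size=(2000, d))

ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
ae.fit(benign, benign)                         # learn to reconstruct benign data

def recon_mse(x):
    # Per-sample reconstruction error
    return np.mean((ae.predict(x) - x) ** 2, axis=1)

threshold = np.quantile(recon_mse(benign), 0.99)   # benign 99th percentile

# Off-manifold perturbations stand in for adversarial examples
adversarial = benign[:200] + rng.normal(0, 0.5, (200, d))
flagged = recon_mse(adversarial) > threshold
print("detection rate:", flagged.mean())
```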
Procedia PDF Downloads 114