Search results for: sum of squared errors (SSE)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1124

884 Estimation of Normalized Glandular Doses Using a Three-Layer Mammographic Phantom

Authors: Kuan-Jen Lai, Fang-Yi Lin, Shang-Rong Huang, Yun-Zheng Zeng, Po-Chieh Hsu, Jay Wu

Abstract:

The normalized glandular dose (DgN) is used to estimate the energy deposited in breast glandular tissue during mammography in clinical practice. Monte Carlo simulations frequently use a uniformly mixed phantom to calculate the conversion factor. However, breast tissues are not uniformly distributed, leading to errors in the estimated conversion factor. This study constructed a three-layer phantom to estimate the normalized glandular dose more accurately. The MCNP code (Monte Carlo N-Particle code) was used to create the geometric structure. We simulated three target/filter combinations (Mo/Mo, Mo/Rh, Rh/Rh), six voltages (25-35 kVp), six HVL parameters, and nine breast phantom thicknesses (2-10 cm) for the three-layer mammographic phantom. The conversion factor for 25%, 50%, and 75% glandularity was calculated. The error of the conversion factors compared with the results of the American College of Radiology (ACR) was within 6%; for Rh/Rh, the difference was within 9%. The difference between the 50% average glandularity and the uniform phantom ranged from -6.7% to 7.1% for the Mo/Mo combination at a voltage of 27 kVp, a half-value layer of 0.34 mmAl, and a breast thickness of 4 cm. According to the simulation results, the regression analysis found that the three-layer mammographic phantom can be used to accurately calculate the conversion factors across 0%-100% glandularity. The difference in glandular tissue distribution leads to errors in the conversion factor calculation. The three-layer mammographic phantom can provide accurate estimates of glandular dose in clinical practice.

Keywords: Monte Carlo simulation, mammography, normalized glandular dose, glandularity

Procedia PDF Downloads 173
883 Neuropsychological Aspects in Adolescents Victims of Sexual Violence with Post-Traumatic Stress Disorder

Authors: Fernanda Mary R. G. Da Silva, Adriana C. F. Mozzambani, Marcelo F. Mello

Abstract:

Introduction: Sexual assault against children and adolescents is a public health problem with serious consequences for their quality of life, especially for those who develop post-traumatic stress disorder (PTSD). The broad literature in this research area points to greater losses in verbal learning, explicit memory, speed of information processing, attention, and executive functioning in PTSD. Objective: To compare the neuropsychological functions of adolescents from 14 to 17 years of age, victims of sexual violence with PTSD, with those of healthy controls. Methodology: Application of a neuropsychological battery composed of the following subtests: WASI vocabulary and matrix reasoning; Digit subtests (WISC-IV); the RAVLT verbal auditory learning test; the Spatial Span subtest of the WMS-III scale; the abbreviated version of the Wisconsin test; the D2 concentrated attention test; the prospective memory subtest of the NEUPSILIN scale; the five-digit test (FDT); and the Stroop test (Trenerry version) in adolescents with a history of sexual violence in the previous six months, referred to Prove (the Violence Care and Research Program of the Federal University of São Paulo) for further treatment. Results: The results showed a deficit in the word coding process in the RAVLT test, with impairment in the A3 (p = 0.004) and A4 (p = 0.016) measures, which compromises the verbal learning process (p = 0.010) and verbal recognition memory (p = 0.012), suggesting worse performance in the acquisition of verbal information that depends on the support of the attentional system. Worse performance was found in list B (p = 0.047), together with a lower priming effect (p = 0.026), that is, a lower evocation index for the initial words presented, and less perseveration (p = 0.002), i.e., repeated words. Therefore, there seems to be a failure in the creation of strategies that support the mnemonic process of retaining the verbal information necessary for learning. Sustained attention was found to be impaired, with greater loss of set in the Wisconsin test (p = 0.023), a lower rate of correct responses in stage C of the Stroop test (p = 0.023) and, consequently, a higher index of erroneous responses in stage C of the Stroop test (p = 0.023), as well as more type II errors in the D2 test (p = 0.008). A higher incidence of total errors was observed in the reading stage of the FDT test (p = 0.002), which suggests fatigue in the execution of the task. Performance was compromised in executive functions, specifically in cognitive flexibility, as indicated by a higher index of total errors in the alternating step of the FDT test (p = 0.009) and a greater number of perseverative errors in the Wisconsin test (p = 0.004). Conclusion: The data from this study suggest that sexual violence and PTSD cause significant impairment in the neuropsychological functions of adolescents, evidencing a risk to quality of life at stages that are fundamental for the development of learning and cognition.

Keywords: adolescents, neuropsychological functions, PTSD, sexual violence

Procedia PDF Downloads 120
882 Melanoma and Non-Melanoma Skin Lesion Classification Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease worldwide, with melanoma and non-melanoma skin cancer being the most common types of cancer in Caucasians. The alarming increase in skin cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy of detection and decrease possible human errors. Several studies have shown that the diagnostic performance of computer algorithms can outperform that of dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improved performance, with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as the support vector machine (SVM), ResNet50, EfficientNetB0, EfficientNetB4, and VGG16.
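
As a concrete illustration of this kind of probability-averaging ensemble, the sketch below combines two ImageNet-pretrained backbones in Keras. The backbone choice, input size, freezing strategy, and averaging rule are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a probability-averaging ensemble for benign/malignant
# dermoscopy classification, assuming Keras and ImageNet-pretrained backbones.
import numpy as np
import tensorflow as tf

def build_branch(backbone_fn, input_shape=(224, 224, 3)):
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=input_shape, pooling="avg")
    base.trainable = False  # freezing the backbone is an assumption
    inputs = tf.keras.Input(shape=input_shape)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(base(inputs))
    return tf.keras.Model(inputs, outputs)

vgg = build_branch(tf.keras.applications.VGG16)
eff = build_branch(tf.keras.applications.EfficientNetB0)

def ensemble_predict(models, images, threshold=0.5):
    """Average the per-model malignancy probabilities, then threshold."""
    probs = np.mean([m.predict(images, verbose=0) for m in models], axis=0)
    return (probs >= threshold).astype(int)
```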

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 66
881 Postpartum Depression and Its Association with Food Insecurity and Social Support among Women in Post-Conflict Northern Uganda

Authors: Kimton Opiyo, Elliot M. Berry, Patil Karamchand, Barnabas K. Natamba

Abstract:

Background: Postpartum depression (PPD) is a major psychiatric disorder that affects women soon after birth and, in some cases, is a continuation of antenatal depression. Food insecurity (FI) and social support (SS) are known to be associated with major depressive disorder, and vice versa. This study was conducted to examine the interrelationships among FI, SS, and PPD among postpartum women in Gulu, a post-conflict region in Uganda. Methods: Cross-sectional data from postpartum women on depression symptoms, FI, and SS were obtained using the Center for Epidemiologic Studies-Depression (CES-D) scale, the Individually Focused FI Access scale (IFIAS), and the Duke-UNC functional social support scale, respectively. Standard regression methods were used to assess associations among FI, SS, and PPD. Results: A total of 239 women were studied, and 40% were found to have any PPD, i.e., depressive symptom scores of ≥ 17. The mean ± standard deviation (SD) FI and SS scores were 6.47 ± 5.02 and 19.11 ± 4.23, respectively. In adjusted analyses, PPD symptoms were found to be positively associated with FI (unstandardized and standardized betas of 0.703 and 0.432, respectively; standard error = 0.093; p < 0.0001) and negatively associated with SS (unstandardized and standardized betas of -0.263 and -0.135, respectively; standard error = 0.111; p = 0.019). Conclusions: Many women in this post-conflict region reported experiencing PPD. In addition, these data suggest that food security and psychosocial support interventions may help mitigate women's experience of PPD or its severity.
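
The adjusted regression described above can be sketched as follows; the file name, column names, and adjustment covariates are hypothetical placeholders, since the abstract does not list them.

```python
# A minimal sketch of regressing PPD symptoms (CES-D) on food insecurity
# (IFIAS) and social support, assuming the data sit in a pandas DataFrame.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gulu_postpartum.csv")  # hypothetical file and columns

# Unstandardized model; 'age' stands in for unspecified adjusters
model = smf.ols("cesd ~ ifias + social_support + age", data=df).fit()
print(model.summary())

# Standardized betas: refit after z-scoring every numeric variable
z = (df - df.mean()) / df.std()
std_model = smf.ols("cesd ~ ifias + social_support + age", data=z).fit()
print(std_model.params)
```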

Keywords: postpartum depression, food insecurity, social support, post-conflict region

Procedia PDF Downloads 152
880 On Phase Based Stereo Matching and Its Related Issues

Authors: András Rövid, Takeshi Hashimoto

Abstract:

The paper focuses on the problem of point correspondence matching in stereo images. The proposed matching algorithm is based on the combination of simpler methods, such as the normalized sum of squared differences (NSSD), with a more complex phase correlation based approach, while also taking noise and other factors into account. The speed of the NSSD and the precision of the phase correlation together yield an efficient approach to finding the best candidate point with sub-pixel accuracy in stereo image pairs. The task of the NSSD in this case is to locate the candidate pixel roughly. Afterwards, the location of the candidate is refined by an enhanced phase correlation based method which, in contrast to the NSSD, has to run only once for each selected pixel.
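
The coarse-to-fine idea can be sketched as below: an NSSD score for the rough integer-pixel search, then phase correlation for the refinement. The normalization used and the peak handling are simplified assumptions; the paper's enhanced sub-pixel interpolation is not reproduced.

```python
# A minimal sketch of NSSD scoring plus FFT-based phase correlation.
import numpy as np

def nssd(patch_a, patch_b):
    # one common normalization: zero-mean, unit-variance patches
    a = (patch_a - patch_a.mean()) / (patch_a.std() + 1e-12)
    b = (patch_b - patch_b.mean()) / (patch_b.std() + 1e-12)
    return np.sum((a - b) ** 2)  # lower = better match

def phase_correlation_shift(patch_a, patch_b):
    """Estimate the translation between two equally sized patches."""
    FA, FB = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
    cross = FA * np.conj(FB)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(r), r.shape)
    # wrap shifts larger than half the patch size to negative values
    return [p - s if p > s // 2 else p for p, s in zip(peak, r.shape)]
```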

Keywords: stereo matching, sub-pixel accuracy, phase correlation, SVD, NSSD

Procedia PDF Downloads 450
879 Constructions of Linear and Robust Codes Based on Wavelet Decompositions

Authors: Alla Levina, Sergey Taranov

Abstract:

The classical approach to providing noise immunity and integrity of information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes, by contrast, can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are denoising signals, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. In this article, we propose two constructions of robust codes. The first class of robust codes is based on the multiplicative inverse in a finite field. In the second robust code construction, the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes.
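
The second construction can be illustrated with a toy sketch, shown below. A small prime field GF(p) is used here purely for readability; codes of this kind are normally built over binary extension fields, an implementation detail this sketch does not reproduce.

```python
# A toy sketch of a robust check symbol computed as the cube of the
# information symbol, over a small prime field (an assumption for clarity).
P = 251  # illustrative prime modulus

def encode(x):
    return (x, pow(x, 3, P))  # codeword = (information, information cubed)

def check(word):
    x, r = word
    return pow(x, 3, P) == r  # an error is masked only if the cube still matches

assert check(encode(42))
assert not check((43, pow(42, 3, P)))  # this single-symbol error is detected
```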

Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability

Procedia PDF Downloads 477
878 Importance of Human Factors on Cybersecurity within Organizations: A Study of Attitudes and Behaviours

Authors: Elham Rajabian

Abstract:

The rise of cybersecurity incidents is a growing threat to most organizations in general, while the impact of the incidents is unique to each organization. Behavioural science needs to concentrate on employees' behaviour in order to prepare key security mitigations against cybersecurity incidents. There are noticeable differences among users of a computer system in terms of complying with security behaviours. These differences can be discussed under several headings, such as delaying tactics on something that must be done, the tendency to act without thinking, future thinking about unexpected implications of present-day issues, and risk-taking behaviours in security policy compliance. In this article, we introduce high-profile cyber-attacks and their impacts on weakening cyber resiliency in organizations. We also give attention to human errors that influence network security. Human errors are discussed as part of psychological matters to enhance compliance with security policies. The organizational challenges are studied in order to shape a sustainable cyber risk management approach in the related work section. Insiders' behaviours are viewed as a cyber security gap from which to draw proper cyber resiliency in section 3. We outline best cybersecurity practices by discussing four CIS challenges in section 4. In this regard, we provide a guideline and metrics to measure cyber resilience in organizations in section 5. In the end, we give some recommendations in order to build a cybersecurity culture based on individual behaviours.

Keywords: cyber resilience, human factors, cybersecurity behavior, attitude, usability, security culture

Procedia PDF Downloads 78
877 Evaluation of Vehicle Classification Categories: Florida Case Study

Authors: Ren Moses, Jaqueline Masaki

Abstract:

This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories, identifying errors arising from the existing system and proposing modifications. The data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by an automatic vehicle classifier (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to define vehicle classes from the video and compare them to the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified. Modifications were made to the classification table to improve the classification accuracy. The results of this study include the development of an updated vehicle classification table with a reduction in total error of 5.1%, a step-by-step procedure to use for the evaluation of vehicle classification studies, and recommendations to improve the FHWA 13-category rule set. The recommendations for the FHWA 13-category rule set indicate the need for the vehicle classification definitions in this scheme to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments and consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure.

Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic

Procedia PDF Downloads 167
876 Mathematical Modeling of the Water Bridge Formation in Porous Media: PEMFC Microchannels

Authors: N. Ibrahim-Rassoul, A. Kessi, E. K. Si-Ahmed, N. Djilali, J. Legrand

Abstract:

The static and dynamic formation of liquid water bridges is analyzed using a combination of visualization experiments in a microchannel with a mathematical model. This paper presents experimental and theoretical findings on water plug/capillary bridge formation in a 250 μm square microchannel. The approach combines mathematical and numerical modeling with experimental visualization and measurements. The generality of the model is also illustrated for flow conditions encountered in the manipulation of polymeric materials and the formation of liquid bridges between patterned surfaces. The predictions of the model agree favorably with the observations as well as with the experimental recordings.

Keywords: green energy, mathematical modeling, fuel cell, water plug, gas diffusion layer, surface of revolution

Procedia PDF Downloads 503
875 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
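
The model-comparison loop described above can be sketched as follows; hyperparameters, split sizes, and the assumption that the task features are already numeric are all illustrative choices, and the xgboost package is assumed to be installed.

```python
# A minimal sketch of comparing the six regressors on cost/schedule impact,
# evaluated with cross-validation, MSE, and R-squared as in the abstract.
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from xgboost import XGBRegressor

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "tree": DecisionTreeRegressor(max_depth=6),
    "forest": RandomForestRegressor(n_estimators=200),
    "gbm": GradientBoostingRegressor(),
    "xgb": XGBRegressor(n_estimators=300, learning_rate=0.05),
}

def evaluate(X, y):
    """X: preprocessed task features, y: cost or schedule impact."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    for name, model in models.items():
        cv = cross_val_score(model, X_tr, y_tr, cv=5, scoring="r2")
        pred = model.fit(X_tr, y_tr).predict(X_te)
        print(f"{name}: CV R2={cv.mean():.3f}, "
              f"MSE={mean_squared_error(y_te, pred):.3f}, "
              f"R2={r2_score(y_te, pred):.3f}")
```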

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 17
874 Knowledge-Attitude-Practice Survey Regarding High Alert Medication in a Teaching Hospital in Eastern India

Authors: D. S. Chakraborty, S. Ghosh, A. Hazra

Abstract:

Objective: Medication errors are a reality in all settings where medicines are prescribed, dispensed, and used. High Alert Medications (HAM) are those that bear a heightened risk of causing significant patient harm when used in error. We conducted a knowledge-attitude-practice survey among residents working in a teaching hospital to assess the ground situation with regard to the handling of HAM. Methods: We plan to approach 242 residents among the approximately 600 currently working in the hospital through purposive sampling. Residents in all disciplines (clinical, paraclinical, and preclinical) are being targeted. A structured questionnaire, pretested on 5 volunteer residents, is being used for data collection. The questionnaire is being administered to residents individually through face-to-face interviews, by two raters, while they are on duty but not during rush hours. Results: Of the 156 residents approached so far, data from 140 have been analyzed, the rest having refused participation. Although background knowledge exists for the majority of respondents, awareness levels regarding HAM are moderate, and attitude is non-uniform. The proportion of respondents correctly able to identify most (> 80%) HAM in three common settings (accident and emergency, obstetrics, and the intensive care unit) is less than 70%. Several potential errors in practice have been identified. The study is ongoing. Conclusions: The situation requires corrective action. There is an urgent need to improve awareness regarding HAM for the sake of patient safety. The pharmacology department can take the lead in designing an awareness campaign with support from the hospital administration.

Keywords: high alert medication, medication error, questionnaire, resident

Procedia PDF Downloads 114
873 Pattern of Refractive Error, Knowledge, Attitude and Practice about Eye Health among the Primary School Children in Bangladesh

Authors: Husain Rajib, K. S. Kishor, D. G. Jewel

Abstract:

Background: Uncorrected refractive error is a common cause of preventable visual impairment in the pediatric age group and can lead to blindness; however, early detection of visual impairment can reduce the problem, with benefits for education and greater involvement in social activities. Glasses are the cheapest and commonest form of correction of refractive errors. To achieve this, the patient must exhibit good compliance with spectacle wear. Patients' attitudes and perceptions of glasses and eye health can affect compliance. Material and method: A prospective community-based cross-sectional study was designed to evaluate the knowledge, attitude, and practices about refractive errors and eye health amongst primary school children. Result: Among 140 respondents, 72 were male and 68 were female. We found that 50 children were myopic (26 male and 24 female) and 27 children were hyperopic (14 male and 13 female). Sixty-three children were astigmatic (32 male and 31 female). The level of knowledge and attitude was satisfactory. The attitude of the students, teachers, and parents was cooperative, which facilitated cycloplegic refraction. Practice was not satisfactory due to social stigma and an information gap. Conclusion: Knowledge of refractive error and acceptance of glasses for the correction of uncorrected refractive error need to be improved. Public awareness programs such as vision screening programs, eye camps, and teacher training programs are beneficial for prescribing and encouraging the wearing of spectacles.

Keywords: refractive error, stigma, knowledge, attitude, practice

Procedia PDF Downloads 248
872 Retrospective Demographic Analysis of Patients Lost to Follow-Up from Antiretroviral Therapy in Mulanje Mission Hospital, Malawi

Authors: Silas Webb, Joseph Hartland

Abstract:

Background: Long-term retention of patients on ART has become a major health challenge in Sub-Saharan Africa (SSA). In 2010, a systematic review of 39 papers found that 30% of patients were no longer taking their ARTs two years after starting treatment. In the same review, it was noted that there is a paucity of data as to why patients become lost to follow-up (LTFU) in SSA. This project was performed in Mulanje Mission Hospital in Malawi as part of Swindon Academy's Global Health eSSC. The HIV prevalence for Malawi is 10.3%, one of the highest rates in the world; however, prevalence soars to 18% in Mulanje. It is therefore essential that patients at risk of being LTFU are identified early and managed appropriately to help them continue to participate in the service. Methodology: All patients on adult antiretroviral formulations at MMH who were classified as 'defaulters' (patients missing a scheduled follow-up visit by more than two months) over the last 12 months were included in the study. Demographic variables were collected from Mastercards for data analysis. A comparison group of patients not lost to follow-up was created using all of the patients who attended the HIV clinic between 18th and 22nd July 2016 and had never defaulted from ART. Data were analysed using the chi-squared (χ²) test, as the data collected were categorical, with alpha levels set at 0.05. Results: Overall, 136 patients had defaulted from ART over the past 12 months at MMH. Of these, 43 patients had missing Mastercards, so 93 defaulter datasets were analysed. In the comparison group, 93 datasets were also analysed, and statistical analysis was done using chi-squared testing. A higher proportion of men was noted in the defaulting group (p = 0.034), and defaulters tended to be younger (p = 0.052). 94.6% of patients who defaulted were taking Tenofovir, Lamivudine, and Efavirenz, the standard first-line ART therapy in Malawi. The mean length of time on ART was 39.0 months (RR: -22.4-100.4) in the defaulters group and 47.3 months (RR: -19.71-114.23) in the control group, a mean difference of 8.3 fewer months in the defaulters group (p = 0.056). Discussion: The findings in this study echo the literature, but this review expands on that and shows that the demographic of the patient at most risk of defaulting and being LTFU would be a young male who has missed more than 4 doses of ART and is within his first year of treatment. For the hospital, these data are important as they identify significant areas for public health focus. For instance, fear of disclosure and stigma may be disproportionately affecting younger men, so interventions can be aimed specifically at them to improve their health outcomes. The mean length of time on medication was 8.3 months less in the defaulters group, with a p-value of 0.056, emphasising the need for more intensive follow-up in the early stages of treatment, when patients are at the highest risk of defaulting.
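
A chi-squared comparison of this kind can be sketched as below; the cell counts are hypothetical placeholders, not the study's actual contingency table.

```python
# A minimal sketch of the chi-squared test comparing defaulters with the
# never-defaulted group on a categorical variable (sex).
from scipy.stats import chi2_contingency

#                 male  female
table = [[52, 41],   # defaulters (hypothetical counts)
         [38, 55]]   # never defaulted (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")  # significant if p < 0.05
```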

Keywords: anti-retroviral therapy, ART, HIV, lost to follow up, Malawi

Procedia PDF Downloads 170
871 Need for a National Newborn Screening Programme in India: Pilot Study Data

Authors: Sudheer Moorkoth, Leslie Edward Lewis, Pragna Rao

Abstract:

Newborn screening (NBS) is part of routine newborn care in many countries worldwide, enabling early detection of rare treatable conditions and inborn errors of metabolism (IEM). India has not started this program yet. In an attempt to understand the challenges in implementing a national newborn screening program in India, we initiated a pilot newborn screening project funded by the Government of Canada. Along with initiating newborn screening at Kasturba Hospital, Manipal, in South India, for six disorders (Congenital Hypothyroidism (CH), Congenital Adrenal Hyperplasia (CAH), Galactosemia, Biotinidase deficiency, Glucose-6-Phosphate Dehydrogenase deficiency (G-6PD), and Phenylketonuria), we also studied the awareness of various stakeholders of newborn screening. In a period of nine months, from August 2017 to March 2018, we screened 1,915 newborns (999 male and 916 female). The results showed that seven babies screened positive. This interim result points to an incidence rate of about 1 in 270 children for these rare disorders collectively, comprising three confirmed cases of CH, two cases of G-6PD deficiency, and one case each of Galactosemia and CAH. A questionnaire-based study to understand the awareness among various stakeholders revealed that there is little awareness among parents, adolescents, and anganwadi workers (public health workers). The interim data point to the need for a national newborn screening programme in India. There is also an immediate need to undertake a large-scale awareness programme to create knowledge of NBS among the various stakeholders.

Keywords: awareness, inborn errors of metabolism (IEM), newborn screening, rare disease

Procedia PDF Downloads 231
870 Barriers and Opportunities for Implementing Electronic Prescription Software in Public Libyan Hospitals

Authors: Abdelbaset M. Elghriani, Abdelsalam M. Maatuk, Isam Denna, Amira Abdulla Werfalli

Abstract:

Electronic prescription software (e-prescribing) benefits patients and physicians by preventing handwriting errors and giving accurate prescriptions. E-prescribing allows prescriptions to be written and sent to pharmacies electronically instead of using handwritten notes. Significant factors that may affect the adoption of e-prescription systems include a lack of technical support, insufficient financial resources to operate the systems, and resistance to change from some clinicians, all of which have been identified as barriers to the implementation of e-prescription systems. This study aims to explore the trends and opinions of physicians and pharmacists about e-prescriptions and to identify the obstacles to and benefits of the application of e-prescriptions in the health care system. A cross-sectional descriptive study was conducted at three Libyan public hospitals. Data were collected through a self-constructed questionnaire to assess opinions regarding potential constraining factors and benefits of implementing an e-prescribing system in hospitals. Data are presented as means, frequency distribution tables, cross-tabulations, and bar charts. The results of the data analysis show that technical, financial, and organizational obstacles are the most important barriers preventing the application of e-prescribing systems in Libyan hospitals. In addition, there was awareness of the benefits of e-prescribing, especially in reducing medication dispensing errors, and a desire among physicians and pharmacists to use electronic prescriptions.

Keywords: physicians, e-prescribing, health care system, pharmacists

Procedia PDF Downloads 111
869 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
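
The feature-selection step mentioned above can be sketched as follows; the dataset path, attribute names, target column, and the choice of k are illustrative assumptions.

```python
# A minimal sketch of ranking player attributes with SelectKBest and RFE
# before fitting a regressor on the overall performance score.
import pandas as pd
from sklearn.feature_selection import SelectKBest, RFE, f_regression
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("players.csv")           # hypothetical dataset, numeric columns
X = df.drop(columns=["overall"])          # player attributes
y = df["overall"]                         # target performance score

kbest = SelectKBest(score_func=f_regression, k=10).fit(X, y)
print("SelectKBest:", list(X.columns[kbest.get_support()]))

rfe = RFE(RandomForestRegressor(n_estimators=100),
          n_features_to_select=10).fit(X, y)
print("RFE:", list(X.columns[rfe.get_support()]))
```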

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 24
868 Ghost Frequency Noise Reduction through Displacement Deviation Analysis

Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran

Abstract:

Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors, and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated, and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise in end-of-line machine as well as vehicle-level measurements. With the application of the DDA method along with standard lead and profile measurement, gears with ghost frequency geometry deviations were identified on the production line to eliminate defective parts and thereby eliminate ghost frequency noise from the vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequency.

Keywords: displacement deviation analysis, gear whine, ghost frequency, sound quality

Procedia PDF Downloads 125
867 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion are reviewed. This investigation leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviors, chemical behaviors, electrochemical behaviors, or geometric aspects of the RC members during a corrosion process. The experimental program is designed to verify the accuracy of a well-selected mathematical model from a rigorous literature study. Fundamentally, the experimental program exemplifies both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c), 0.4, 0.5, and 0.6, and two cover depths, 25 mm and 50 mm. 12 mm bars are used for the column elements and 16 mm bars for the slab elements. All the samples are subjected to accelerated chloride corrosion in a chloride bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, it is clear that the well-selected mathematical model includes the mechanical properties, the chemical and electrochemical properties, the nature of corrosion (whether accelerated or natural), and the amount of porous area that rust products can occupy before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model has ±20% and ±10% accuracy for one-dimensional and two-dimensional chloride diffusion, respectively, compared to the experimental output. The half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that the mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrosion of the reinforcement in the concrete due to chloride diffusion. The factors considered for this analysis are w/c, bar diameter, and cover depth. The analysis is accomplished using Minitab statistical software, and it shows that cover depth has the most significant effect on the time to crack the concrete due to chloride-induced corrosion among the factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to predetermine the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability in terms of chloride diffusion.

Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion

Procedia PDF Downloads 203
866 The Effect of Dark Energy on the Amplitude of Gravitational Waves

Authors: Jafar Khodagholizadeh

Abstract:

In this talk, we study the tensor mode perturbation equation in the presence of a nonzero $\Lambda$ as dark energy, whose dynamic nature depends on the Hubble parameter $H$ and/or its time derivative. Dark energy, according to the total vacuum contribution, has little effect during the radiation-dominated era, but it reduces the squared amplitude of gravitational waves (GWs) by up to $60\%$ for the wavelengths that enter the horizon during the matter-dominated era. Moreover, observationally constrained dark energy models, such as the running vacuum model (RVM), the generalized running vacuum model (GRVM), and the generalized running vacuum subcase (GRVS), are effective in reducing the GWs' amplitude. Although this effect is smaller for the wavelengths that enter the horizon at later times, the reduction is stable and permanent.
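
For reference, the standard tensor-mode equation on a flat FLRW background is given below for each Fourier polarization amplitude; the dark-energy dependence enters through $H(t)$. The paper's specific $\Lambda(H)$ corrections are not reproduced here.

```latex
\ddot{h}_k + 3H\dot{h}_k + \frac{k^2}{a^2}\,h_k = 0
```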

Keywords: gravitational waves, dark energy, GW's amplitude, all stage universe

Procedia PDF Downloads 140
865 A Comparative Study of Various Control Methods for Rendezvous of a Satellite Couple

Authors: Hasan Basaran, Emre Unal

Abstract:

Formation flying of satellites is a mission that involves relative position keeping of different satellites in a constellation. In this study, different control algorithms are compared with one another in terms of ΔV (velocity increment) and tracking error. Various control methods, covering continuous and impulsive approaches, are implemented and tested for satellites flying in low Earth orbit. Feedback linearization, sliding mode control, and model predictive control are designed and compared with an impulsive feedback law, which is based on mean orbital elements. The feedback linearization and sliding mode control approaches use identical mathematical models that include second-order Earth oblateness effects. The model predictive control, on the other hand, does not include any perturbations and assumes a circular chief orbit. The comparison is done for four different initial errors and evaluated in terms of velocity increment, root mean square error, maximum steady-state error, and settling time. It was observed that the impulsive law consumed the least ΔV while producing the highest maximum steady-state error. The continuous control laws, however, consumed larger velocity increments and produced smaller tracking errors. Finally, an inversely proportional relationship between tracking error and velocity increment was established.
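
The circular-chief, unperturbed relative-motion model assumed by the model predictive controller is commonly written as the Hill-Clohessy-Wiltshire equations, shown below, with mean motion $n$ and control accelerations $u_x, u_y, u_z$; the J2-perturbed model used by the other controllers adds Earth-oblateness terms not shown here.

```latex
\ddot{x} - 2n\dot{y} - 3n^{2}x = u_x, \qquad
\ddot{y} + 2n\dot{x} = u_y, \qquad
\ddot{z} + n^{2}z = u_z
```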

Keywords: chief-deputy satellites, feedback linearization, follower-leader satellites, formation flight, fuel consumption, model predictive control, rendezvous, sliding mode

Procedia PDF Downloads 86
864 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy and variability, based on a time baseline of 1 yr. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria; and, thus, to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we have extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we can estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 61
863 Reducing Diagnostic Error in Australian Emergency Departments Using a Behavioural Approach

Authors: Breanna Wright, Peter Bragge

Abstract:

Diagnostic error rates in healthcare are approximately 10% of cases. Diagnostic errors can cause patient harm due to inappropriate, inadequate or delayed treatment, and such errors contribute heavily to medical liability claims globally. Therefore, addressing diagnostic error is a high priority. In most cases, diagnostic errors are the result of faulty information synthesis rather than lack of knowledge. Specifically, the majority of diagnostic errors involve cognitive factors, and in particular, cognitive biases. Emergency Departments are an environment with heightened risk of diagnostic error due to time and resource pressures, a frequently chaotic environment, and patients arriving undifferentiated and with minimal context. This project aimed to develop a behavioural, evidence-informed intervention to reduce diagnostic error in Emergency Departments through co-design with emergency physicians, insurers, researchers, hospital managers, citizens and consumer representatives. The Forum Process was utilised to address this aim. This involves convening a small (4 – 6 member) expert panel to guide a focused literature and practice review; convening of a 10 – 12 person citizens panel to gather perspectives of laypeople, including those affected by misdiagnoses; and a 18 – 22 person structured stakeholder dialogue bringing together representatives of the aforementioned stakeholder groups. The process not only provides in-depth analysis of the problem and associated behaviours, but brings together expertise and insight to facilitate identification of a behaviour change intervention. Informed by the literature and practice review, the Citizens Panel focused on eliciting the values and concerns of those affected or potentially affected by diagnostic error. Citizens were comfortable with diagnostic uncertainty if doctors were honest with them. They also emphasised the importance of open communication between doctors and patients and their families. Citizens expect more consistent standards across the state and better access for both patients and their doctors to patient health information to avoid time-consuming re-taking of long patient histories and medication regimes when re-presenting at Emergency Departments and to reduce the risk of unintentional omissions. The structured Stakeholder Dialogue focused on identifying a feasible behavioural intervention to review diagnoses in Emergency Departments. This needed to consider the role of cognitive bias in medical decision-making; contextual factors (in Victoria, there is a legislated 4-hour maximum time between ED triage and discharge / hospital admission); resource availability; and the need to ensure the intervention could work in large metropolitan as well as small rural and regional ED settings across Victoria. The identified behavioural intervention will be piloted in approximately ten hospital EDs across Victoria, Australia. This presentation will detail the findings of all review and consultation activities, describe the behavioural intervention developed and present results of the pilot trial.

Keywords: behavioural intervention, cognitive bias, decision-making, diagnostic error

Procedia PDF Downloads 112
862 Development of a Real Time Axial Force Measurement System and IoT-Based Monitoring for Smart Bearing

Authors: Hassam Ahmed, Yuanzhi Liu, Yassine Selami, Wei Tao, Hui Zhao

Abstract:

The purpose of this research is to develop a real-time axial force measurement system for a smart bearing using strain gauges, with data acquisition performed by an Arduino microcontroller due to its ease of use and low cost. The measured signal is acquired using a Wheatstone bridge and then discretized using an analog-to-digital converter (ADC). For bearing monitoring, a real-time monitoring system based on the Internet of Things (IoT) and Bluetooth was developed. Experimental tests were performed on a bearing over a force range up to 600 kN. The experimental results show that there is a proportional linear relationship between the applied force and the output voltage, with an R-squared of 0.9878 based on the regression analysis.
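
The calibration step implied by the reported linear force-voltage relationship can be sketched as below; the sample readings are hypothetical, not the paper's data.

```python
# A minimal sketch: fit a line to (voltage, force) pairs and report R-squared.
import numpy as np

voltage = np.array([0.10, 0.45, 0.88, 1.32, 1.74, 2.20])  # volts (hypothetical)
force = np.array([0, 120, 240, 360, 480, 600])            # kN (hypothetical)

slope, intercept = np.polyfit(voltage, force, 1)
pred = slope * voltage + intercept
r_squared = 1 - np.sum((force - pred) ** 2) / np.sum((force - force.mean()) ** 2)
print(f"force = {slope:.1f} * V + {intercept:.1f}, R-squared = {r_squared:.4f}")
```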

Keywords: bearing, force measurement, IoT, strain gauge

Procedia PDF Downloads 127
861 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, formulae were entered into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validation was conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test evaluation of the mean values for the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in the titrimetric evaluation of ZnSO4 tablets. Human errors were minimized in calculations when procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 155
860 Reduction of Impulsive Noise in OFDM System using Adaptive Algorithm

Authors: Alina Mirza, Sumrin M. Kabir, Shahzad A. Sheikh

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM), with its high data rate, high spectral efficiency, and ability to mitigate the effects of multipath, is most suitable for wireless applications. Impulsive noise distorts OFDM transmission, and therefore methods must be investigated to suppress this noise. In this paper, a State Space Recursive Least Squares (SSRLS) algorithm based adaptive impulsive noise suppressor for OFDM communication systems is proposed, and a comparison with another adaptive algorithm is conducted. The state-space model-dependent recursive parameters of the proposed scheme enable it to achieve lower steady-state mean squared error (MSE), lower bit error rate (BER), and faster convergence than some existing algorithms.
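
For orientation, a conventional recursive least squares (RLS) adaptive filter is sketched below; the paper's SSRLS variant adds a state-space signal model on top of this recursion, an extension not reproduced here, and the filter order and forgetting factor are illustrative assumptions.

```python
# A minimal sketch of a conventional RLS adaptive filter for noise suppression.
import numpy as np

def rls_filter(d, x, order=8, lam=0.99, delta=100.0):
    """Track the desired signal d[n] from the last `order` samples of x[n]."""
    w = np.zeros(order)          # filter coefficients
    P = delta * np.eye(order)    # inverse correlation matrix
    y = np.zeros(len(d))
    for n in range(order, len(d)):
        u = x[n - order:n][::-1]            # regression vector
        k = P @ u / (lam + u @ P @ u)       # gain vector
        y[n] = w @ u                        # filter output
        e = d[n] - y[n]                     # a priori error
        w = w + k * e                       # coefficient update
        P = (P - np.outer(k, u @ P)) / lam  # inverse-correlation update
    return y, w
```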

Keywords: OFDM, impulsive noise, SSRLS, BER

Procedia PDF Downloads 437
859 Estimating Cyclone Intensity Using INSAT-3D IR Images Based on Convolution Neural Network Model

Authors: Divvela Vishnu Sai Kumar, Deepak Arora, Sheenu Rizvi

Abstract:

Forecasting a cyclone through satellite images consists of estimating the intensity of the cyclone and predicting it before the cyclone arrives. This research work can help people take safety measures before the cyclone comes. The prediction of the intensity of a cyclone is very important to save lives and minimize the damage caused by cyclones, which are among the costliest natural disasters, causing widespread damage globally. The authors propose five different convolutional neural network (CNN) models that estimate the intensity of cyclones from INSAT-3D IR images. Among the many techniques used to estimate intensity, the best model proposed by the authors estimates intensity with a root mean squared error (RMSE) of 10.02 kts.
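
A CNN of this kind, regressing intensity in knots from a single-channel IR image with a mean-squared-error loss so RMSE can be reported, can be sketched as below; the layer sizes and image dimensions are assumptions, and none of the paper's five architectures is reproduced exactly.

```python
# A minimal sketch of a CNN regressor for cyclone intensity (kts).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),   # INSAT-3D IR crop (assumed size)
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                     # predicted intensity in kts
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
```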

Keywords: estimating cyclone intensity, deep learning, convolution neural network, prediction models

Procedia PDF Downloads 100
858 Analysis of the Level of Production Failures by Implementing New Assembly Line

Authors: Joanna Kochanska, Dagmara Gornicka, Anna Burduk

Abstract:

The article examines the process of implementing a new assembly line in a manufacturing enterprise in the household appliances industry. At the initial stages of the project, a decision was made that one of its foundations should be the concept of lean management. Because of that, eliminating as many errors as possible in the first phases of the line's functioning was emphasized. During the start-up of the line, all production losses were identified and documented (from serious machine failures, through any unplanned downtime, to micro-stops and quality defects). During the 6-week line start-up period, all errors resulting from problems in various areas were analyzed. These areas were, among others, production, logistics, quality, and organization. The aim of the work was to analyze the occurrence of production failures during the initial phase of starting up the line and to propose a method for determining their critical level during its full functionality. The repeatability of the production losses in various areas, and at different levels, at such an early stage of implementation was examined using the methods of statistical process control. Based on a Pareto analysis, the weakest points were identified in order to focus improvement actions on them. The next step was to examine the effectiveness of the actions undertaken to reduce the level of recorded losses. Based on the obtained results, a method for determining the critical failure level in the studied areas was proposed. The developed coefficient can be used as an alarm in case of an imbalance in production caused by an increased failure level in production and production support processes during the standardized functioning of the line.
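
The Pareto step can be sketched as below: rank loss categories by frequency and accumulate their share to find the "vital few" areas. Category names and counts are hypothetical placeholders.

```python
# A minimal sketch of a Pareto ranking of production-loss categories.
losses = {"machine failure": 42, "micro-stops": 87, "quality defects": 65,
          "logistics": 21, "organization": 12}  # hypothetical counts

ranked = sorted(losses.items(), key=lambda kv: kv[1], reverse=True)
total = sum(losses.values())
cumulative = 0
for area, count in ranked:
    cumulative += count
    print(f"{area:16s} {count:4d}  cumulative {100 * cumulative / total:5.1f}%")
```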

Keywords: production failures, level of production losses, new production line implementation, assembly line, statistical process control

Procedia PDF Downloads 114
857 InSAR Time-Series Phase Unwrapping for Urban Areas

Authors: Hui Luo, Zhenhong Li, Zhen Dong

Abstract:

The analysis of multi-temporal InSAR (MTInSAR) techniques such as persistent scatterer (PS) and small baseline subset (SBAS) usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover and highly discontinuous terrain; 2) temporal phase discontinuities such as time-varying atmospheric delay. To overcome these limitations, a least-squares based PU method is introduced in this paper, which incorporates baseline-combination interferograms and an adjacent phase gradient network. Firstly, permanent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase differences. Then, phase differencing is performed between connected PSs (connected by a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artifacts. After that, the phase differences along arcs are computed by the least-squares method, followed by an outlier detector that removes the arcs with phase ambiguities. The unwrapped phase can then be obtained by spatial integration. The proposed method is tested on real TerraSAR-X data, and the results are compared with the ones obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method can successfully unwrap the interferograms in urban areas even when high discontinuities exist, while StaMPS fails. Finally, precise DEM errors can be obtained from the unwrapped interferograms.
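
The arc-network least-squares step can be sketched as below: given phase gradients observed on arcs between PS points, solve the incidence system for the per-point phase. The tiny network, uniform weights, and absence of outlier rejection are simplifying assumptions.

```python
# A minimal sketch of least-squares integration of arc phase gradients.
import numpy as np

# arcs as (from_ps, to_ps) index pairs and their phase gradients (rad)
arcs = [(0, 1), (1, 2), (0, 2), (2, 3)]
dphi = np.array([0.8, 1.1, 1.9, -0.4])   # hypothetical arc observations

n_ps = 4
A = np.zeros((len(arcs), n_ps))
for row, (i, j) in enumerate(arcs):
    A[row, i], A[row, j] = -1.0, 1.0     # phi_j - phi_i = dphi on each arc

A = A[:, 1:]                             # fix phi_0 = 0 as the reference point
phi, *_ = np.linalg.lstsq(A, dphi, rcond=None)
print(np.concatenate(([0.0], phi)))      # estimated phase at each PS
```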

Keywords: phase unwrapping, time series, InSAR, urban areas

Procedia PDF Downloads 137
856 Optimization of Biodiesel Production from Sunflower Oil Using Central Composite Design

Authors: Pascal Mwenge, Jefrey Pilusa, Tumisang Seodigeng

Abstract:

The current study investigated the effect of catalyst ratio and methanol-to-oil ratio on biodiesel production using central composite design. Biodiesel was produced by transesterification using sodium hydroxide as a homogeneous catalyst; a laboratory-scale reactor consisting of a flat-bottom flask fitted with a reflux condenser and a heating plate was used to produce the biodiesel. Key parameters, including time, temperature, and mixing rate, were kept constant at 60 minutes, 60 °C, and 600 RPM, respectively. From the results obtained, it was observed that the biodiesel yield depends on the catalyst ratio and the methanol-to-oil ratio. The highest yield of 50.65% was obtained at a catalyst ratio of 0.5 wt.% and a methanol-to-oil mole ratio of 10.5. The analysis of variance of the biodiesel yield showed an R-squared value of 0.8387. A quadratic mathematical model was developed to predict the biodiesel yield within the specified parameter ranges.
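
Fitting the second-order (CCD) response surface for yield as a function of catalyst ratio and methanol-to-oil ratio can be sketched as below; the design points and yields are hypothetical placeholders, not the study's data.

```python
# A minimal sketch of a full quadratic response-surface fit for a two-factor CCD.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({  # hypothetical CCD runs: c = catalyst wt.%, m = MeOH:oil
    "c": [0.3, 0.3, 0.7, 0.7, 0.5, 0.5, 0.5, 0.22, 0.78],
    "m": [6.0, 15.0, 6.0, 15.0, 10.5, 4.1, 16.9, 10.5, 10.5],
    "yield_pct": [31.2, 40.5, 35.8, 38.1, 50.65, 28.7, 41.9, 33.4, 36.6],
})

# Linear, interaction, and squared terms of the quadratic model
model = smf.ols("yield_pct ~ c + m + c:m + I(c**2) + I(m**2)", data=df).fit()
print(model.rsquared)  # the study reports R-squared = 0.8387 for its own data
```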

Keywords: ANOVA, biodiesel, catalyst, CCD, transesterification

Procedia PDF Downloads 188
855 Performance Analysis of Artificial Neural Network with Decision Tree in Prediction of Diabetes Mellitus

Authors: J. K. Alhassan, B. Attah, S. Misra

Abstract:

Human beings have the ability to make logical decisions. Although human decision-making is often optimal, it is insufficient when huge amounts of data are to be classified. Medical datasets are a vital ingredient in predicting patients' health conditions. To obtain the best prediction, the most suitable machine learning algorithms are required. This work compared the performance of Artificial Neural Network (ANN) and Decision Tree Algorithms (DTA) with regard to some performance metrics using diabetes data. The evaluation was done using the Weka software, and it was found that DTA performed better than ANN. Multilayer Perceptron (MLP) and Radial Basis Function (RBF) were the two algorithms used for ANN, while REPTree and LADTree were the DTA models used. The Root Mean Squared Error (RMSE) of MLP is 0.3913, that of RBF is 0.3625, that of REPTree is 0.3174, and that of LADTree is 0.3206.
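
A comparison of this kind can be reproduced in spirit with scikit-learn, as sketched below; the study itself used Weka's MLP, RBF, REPTree, and LADTree. RMSE is computed on predicted class probabilities, mirroring Weka's RMSE for classifiers, and the synthetic data stand in for the real diabetes dataset.

```python
# A minimal sketch comparing an ANN and a decision tree by probability RMSE.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import mean_squared_error

X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("MLP", MLPClassifier(max_iter=1000, random_state=0)),
                  ("DecisionTree", DecisionTreeClassifier(random_state=0))]:
    proba = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    rmse = np.sqrt(mean_squared_error(y_te, proba))
    print(f"{name}: RMSE = {rmse:.4f}")
```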

Keywords: artificial neural network, classification, decision tree algorithms, diabetes mellitus

Procedia PDF Downloads 390